Recently, I was asked to solve an interesting problem: the array sorting problem.
An array A contains n unique integer elements, where n ranges from 2 to 10.
The output is an array B, sorted in ascending order. The length of array B must be the same as the length of array A.
Examples: A = [3,0] -> B = [0,3], A = [1,3,2] -> B = [1,2,3], A = [5,9,1,3,7] -> B = [1,3,5,7,9]
Array sorting is not a new problem, and there are many sorting algorithms: straight insertion, Shell sort, bubble sort, quicksort, selection sort, heapsort, etc. The problem above becomes much more interesting if we consider it as a combinatorial optimization problem, where various machine learning approaches can be applied.
“Combinatorial Optimization is a category of problems which requires optimizing a function over a combination of discrete objects and the solutions are constrained. Examples include finding shortest paths in a graph, maximizing value in the Knapsack problem and finding boolean settings that satisfy a set of constraints. Many of these problems are NP-Hard, which means that no polynomial time solution can be developed for them. Instead, we can only produce approximations in polynomial time that are guaranteed to be some factor worse than the true optimal solution.”
Source: Recent Advances in Neural Program Synthesis (https://arxiv.org/abs/1802.02353)
Traditional solvers often rely on handcrafted rules to make decisions. In recent years, many machine learning (ML) techniques have been used to solve combinatorial optimization problems, with approaches ranging from supervised learning to modern reinforcement learning.
Using the sorting problem above, we will see how it can be solved using different ML techniques.
In this series, we will start with some supervised techniques, then apply neuro-evolution, and finally use some modern RL techniques.
Part 1: Supervised learning: Gradient Boosting, Fully Connected Neural Networks, Seq2Seq.
Part 2: Deep Neuro-Evolution: NEAT, Evolution Strategies, Genetic Algorithms.
Part 3: Reinforcement Learning: Deep Q-Network, Actor-Critic, PPO with a pointer network and an attention-based model.
(Note: Enable Colab GPU to speed up running time)
Supervised machine learning algorithms are designed to learn by example, so if we want to use supervised learning, we have to have data.
First, we will generate three data sets of different sizes: 1000, 5000, and 50000 samples. Then we will train several models on these data sets and compare their sorting abilities after the learning process.
1. Generate training data
How do we generate data? One possible approach: if we consider each element of the input list as a feature and each element of the sorted list as a label, we can easily convert the data into tabular form.
Then we can use any multi-label regression or multi-label classification model on this training data.
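As a sketch (the article's exact generator isn't shown, so the value range and seed below are assumptions), training pairs can be produced by sampling lists of unique integers and sorting them:

```python
import random

def generate_dataset(n_samples, length, low=0, high=100, seed=0):
    """Generate (A, B) pairs: A is a list of unique integers, B is A sorted."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n_samples):
        a = rng.sample(range(low, high), length)  # unique values
        X.append(a)           # each element of A is a feature
        y.append(sorted(a))   # each element of the sorted list is a label
    return X, y

X, y = generate_dataset(n_samples=1000, length=5)
```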
2. Multi-label regression
For this tabular dataset, I will use two common techniques: gradient boosting (using the XGBoost library) and simple fully connected neural networks (FCNNs).
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor
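A minimal sketch of the multi-label regression setup might look like the following. The article uses xgboost's XGBRegressor; to keep this sketch self-contained, scikit-learn's GradientBoostingRegressor is substituted here (both expose the same estimator interface), and the data sizes are illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

# Toy data: each row is an unsorted list, each target row is the sorted list.
rng = np.random.default_rng(0)
X = np.array([rng.choice(100, size=5, replace=False) for _ in range(2000)])
y = np.sort(X, axis=1)  # one regression label per output position

# One boosted regressor per output position; xgboost.XGBRegressor would be a
# drop-in replacement for GradientBoostingRegressor below.
model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=50))
model.fit(X, y)

# Predictions are continuous, so round them back to integers.
pred = np.rint(model.predict(np.array([[5, 9, 1, 3, 7]])))
```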
Tiny Machine Learning (TinyML) is, unsurprisingly, a machine learning technique, but it is often used to build machine learning applications that require high performance on limited hardware: a tiny neural network running on a microcontroller with very low power requirements (sometimes < 1 mW).
Figure 1: Tiny ML, the next AI revolution 
TinyML is often deployed in low-energy systems such as microcontrollers or sensors to perform automated tasks. A typical example is Internet of Things (IoT) devices. However, the biggest challenge in implementing TinyML is that it requires "full-stack" engineers or data scientists with deep knowledge of building hardware, designing system architectures, and developing software and applications.
TinyML, IoT, and embedded systems
The Internet of Things (IoT) refers to the network of physical objects (a.k.a. things) that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the Internet. Therefore, most IoT devices could apply TinyML to enhance their data collection and processing. In other words, as many machine learning experts argue, the relationship between TinyML, IoT, and embedded systems will be a long-lasting one (TinyML belongs to the IoT).
Figure 2: One commercial application of TinyML in a smart house 
In the coming era of information explosion, TinyML will enable humans to deliver many brilliant applications that help us reduce the burden of processing data. Some examples include:
In agriculture: profit losses due to animal illness can be reduced by using wearable devices. These smart sensors help monitor health vitals such as heart rate, blood pressure, and temperature, and TinyML will be useful in predicting the onset of disease and epidemics.
In industry: TinyML can prevent downtime due to equipment failure by enabling real-time decisions without human interaction in the manufacturing sector. It can signal workers to perform preventative maintenance when necessary, based on equipment conditions.
In retail: TinyML can help increase profits in indirect ways by providing effective means for warehouse or store monitoring. As smart sensors become more common, they could be used in small stores, supermarkets, or hypermarkets to monitor shelves in-store, and TinyML will be useful in processing that data to prevent items from going out of stock.
In mobility: TinyML will give sensors more power to ingest real-time traffic data. Once those sensors are deployed in the real world, humans will no longer need to worry about traffic-related issues such as traffic jams and accidents.
Imagine all the sensors in the embedded systems mentioned above connected over a super-fast Internet connection, with every TinyML algorithm coordinated by a giant ML system. That is a time when humans can take advantage of computing power to perform boring tasks; we would certainly feel happier, have more time for our families, and have more time to make important decisions.
First glance at the potential of TinyML
According to a survey by ABI Research, by 2030 almost 250 billion microcontrollers in our printers, TVs, cars, and pacemakers will be able to perform tasks that previously only our computers and smartphones could handle. All of our devices and appliances are getting smarter thanks to microcontrollers. In addition, Silent Intelligence predicts that TinyML can reach more than $70 billion in economic value by the end of 2025. From 2016 to 2020, the number of microcontrollers (MCUs) increased rapidly, and this figure is predicted to keep rising over the next three years.
What is AWS Rekognition?
Amazon Rekognition is a service that makes it easy to add powerful image- and video-based visual analysis to your applications.
Rekognition Image lets you easily build powerful applications to search, verify, and organize millions of images.
Rekognition Video lets you extract motion-based context from stored or live-stream videos and helps you analyze them.
You just provide an image or video to the Rekognition API, and the service can identify objects, people, text, scenes, and activities. It can detect inappropriate content as well.
Amazon Rekognition also provides highly accurate facial analysis and facial recognition. You can detect, analyze, and compare faces for a wide variety of use cases, including user verification, cataloging, people counting, and public safety.
Amazon Rekognition is a HIPAA-eligible service.
You need to ensure that the Amazon S3 bucket you want to use is in the same region as your Amazon Rekognition API endpoint.
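For illustration, a DetectLabels request might be sketched as below; the bucket and object names are hypothetical, and the actual boto3 call is left commented out since it needs AWS credentials:

```python
# Hypothetical request payload for Amazon Rekognition's DetectLabels API.
request = {
    "Image": {"S3Object": {"Bucket": "my-example-bucket", "Name": "photo.jpg"}},
    "MaxLabels": 10,        # return at most 10 labels
    "MinConfidence": 80,    # only labels with >= 80% confidence
}

# With boto3 (requires AWS credentials; not executed here):
# import boto3
# client = boto3.client("rekognition")
# response = client.detect_labels(**request)
# for label in response["Labels"]:
#     print(label["Name"], label["Confidence"])
```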
How does it work?
Amazon Rekognition provides two API sets: Amazon Rekognition Image for analyzing images and Amazon Rekognition Video for analyzing videos.
Both API sets perform detection and recognition analysis of images and videos.
Amazon Rekognition Video can be used to track the path of people in a stored video.
Amazon Rekognition Video can also search a streaming video for persons whose facial descriptions match descriptions already stored by Amazon Rekognition.
The RecognizeCelebrities API returns information for up to 100 celebrities detected in an image.
Use cases for AWS Rekognition
Searchable image and video libraries
Amazon Rekognition makes images and stored videos searchable.
Face-based user verification
It can be used in building-access or similar applications, comparing a live image to a reference image.
Sentiment and demographic analysis
Amazon Rekognition detects emotions such as happiness, sadness, or surprise, and demographic information such as gender, from facial images.
Rekognition can analyze images and send the emotion and demographic attributes to Amazon Redshift for periodic reporting on trends, such as across in-store locations and similar scenarios.
Images, stored videos, and streaming videos can be searched for faces that match those in a face collection. A face collection is an index of faces that you own and manage.
Unsafe Content Detection
Amazon Rekognition can detect explicit and suggestive adult content in images and in videos.
Examples include social and dating sites, photo-sharing platforms, blogs and forums, apps for children, e-commerce sites, entertainment, and online advertising services.
Amazon Rekognition can recognize thousands of celebrities (in politics, sports, business, entertainment, and media) within supplied images and videos.
Detecting text in an image allows for extracting textual content from images.
Integrate powerful image and video recognition into your apps
Amazon Rekognition removes the complexity of building image recognition capabilities into applications by making powerful and accurate analysis available through a simple API.
Deep learning-based image and video analysis
Rekognition uses deep learning technology to accurately analyze images, find and compare faces in images, and detect objects and scenes within images and videos.
Scalable image analysis
Amazon Rekognition enables the analysis of millions of images.
This allows for curating and organizing massive amounts of visual data.
Clients pay for the images and videos they analyze and the face metadata they store. There are no minimum fees or upfront commitments.
With data that has time-related information, time features can be created to possibly add more information to the models.
Since time series in machine learning is a broad topic, this article only aims to introduce basic ways to create time features for those models.
Type of data expected for this approach
Transaction data, or any kind of data similar to it, is expected to be the most common type for this approach. Other kinds of data that carry timestamp information for each data point should also be applicable to some extent.
Before attempting: the need to analyze the problem and its scope
Data with a time element can be represented as a time series: a set of data points describing an entity, ordered by time index. One key aspect of time series is that observations are expected to depend on previous ones in the sequence, with later observations correlated to earlier ones. In those cases, using time series models for forecasting is a straightforward way to use this data. Another way is to use feature engineering to transform the data into features suitable for a supervised machine learning model, which is the focus of this article.
Whether to use a time series model or adapt a machine learning model depends on the situation. In some cases, domain knowledge or business requirements will influence this decision; it is better to analyze the problem first to see whether one or both types of models are needed.
Regardless of domain knowledge or business requirements, the decision should always consider the efficiency the approach brings in terms of accuracy and computation cost.
A first preprocessing step for an initial set of time features: extracting time information from the timestamp
The most straightforward step is to extract basic time units, for instance hour, date, month, and year, into separate features. Another kind of information that can be extracted is the characteristic of the time period: whether the time falls in a certain part of the day (morning, afternoon), whether it is a weekend, whether it is a holiday, etc.
In some business or domain contexts, these initial features are already needed, to see whether the observed values follow those factors. For example, suppose the data records timestamps of customers visiting a shop and their purchases. There is a need to know at which hour, date, or month a customer would come and purchase, so that follow-up actions can be taken to increase sales.
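A quick pandas sketch of this extraction step (the shop-visit timestamps are made up):

```python
import pandas as pd

# Hypothetical transaction data: one timestamp per customer visit.
df = pd.DataFrame({"timestamp": pd.to_datetime([
    "2021-03-01 09:15", "2021-03-06 14:30", "2021-03-07 20:45"])})

# Basic time units as separate features
df["hour"] = df["timestamp"].dt.hour
df["day"] = df["timestamp"].dt.day
df["month"] = df["timestamp"].dt.month
df["year"] = df["timestamp"].dt.year

# Characteristics of the time period
df["is_weekend"] = df["timestamp"].dt.dayofweek >= 5   # Sat=5, Sun=6
df["is_morning"] = df["hour"].between(6, 11)
```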
Regarding feature engineering for time data, the well-known and commonly used technique is to aggregate features by taking statistics (variance, max, min, etc.) of the values grouped by a desired time unit: hours, days, months, and so on.
Apart from that, a time window can be defined, and aggregates computed over a rolling or expanding window:
- Rolling: use a fixed window size; to predict a value for the data point at a given time, features are computed by aggregating a fixed number of backward time steps according to the window.
- Expanding: from the data point, the window covers the whole record of past time steps.
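The two window types can be sketched with pandas (the daily series is made up; `shift(1)` keeps each feature strictly backward-looking):

```python
import pandas as pd

# Hypothetical daily sales series
s = pd.Series([10, 12, 9, 14, 11],
              index=pd.date_range("2021-01-01", periods=5, freq="D"))

# Rolling: fixed window of 3 days; shift(1) so each feature only uses the past
rolling_mean = s.shift(1).rolling(window=3).mean()

# Expanding: the window grows to cover the whole past record
expanding_max = s.shift(1).expanding().max()
```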
There are also two ways of aggregating:
- Aggregating to create new features for the current data points. In this case, the model is considered to include the time series characteristic: a moment is likely related to other moments in the recent past.
- Aggregating to create a new set of data points, with a corresponding new set of features, from the current ones. In this case, the number of data points the model considers changes, and each new data point summarizes information from a subset of the initial data points. As a result, the objects the model focuses on may shift, as mentioned in the section on analyzing the problem. If the data records only one entity, in other words contains a single time series, the new data points summarize the other features' values in the chosen time unit. On the other hand, if more entities are observed in the data set, each new data point summarizes the information of one observed entity.
How to decide the focus objects of the problem and the approach is situational, but for a fresh problem with fresh data and no specific requirement or prior domain knowledge, it is better to consider all of them for the model and run feature selection to see whether the created time features have any value.
Dealing with hours of a day – Circular data
For some needs, a specific time of day must be the focus; detecting fraudulent transactions is a good example. To find, say, the most frequent time at which a behaviour is performed, the arithmetic mean may be misleading and is not a good representation. The important point is that the hour of day is circular data, and it should be represented on a circular axis with values between 0 and 2π. For a better representation of the mean, using the von Mises distribution to obtain a periodic mean is a suitable approach (Mishtert, 2019).
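A small numpy sketch of a circular mean for hours (a simpler stand-in for the full von Mises treatment): averaging 23:00 and 01:00 arithmetically gives 12:00, while the circular mean lands at midnight.

```python
import numpy as np

def circular_mean_hours(hours):
    """Mean of hour-of-day values treated as angles on a 24-hour circle."""
    angles = np.asarray(hours) * 2 * np.pi / 24          # hours -> radians
    mean_angle = np.arctan2(np.sin(angles).mean(),
                            np.cos(angles).mean())        # mean direction
    return (mean_angle * 24 / (2 * np.pi)) % 24           # radians -> hours

# 23:00 and 01:00 straddle midnight: the arithmetic mean says 12:00, but the
# circular mean lands at midnight (0 h, up to floating-point wrap-around).
print(circular_mean_hours([23, 1]))
```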
Validation for the model
Before building the model, a validation set needs to be selected from the data. Usually, to avoid overfitting, the data is randomly shuffled and then divided into a training set and a validation set. However, for time-dependent data this should not be done, to avoid the mistake of having past data in the validation set and future data in the training set; in other words, using the future to predict the past.
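A minimal sketch of such a time-ordered split (the data and the cut point are illustrative):

```python
import pandas as pd

# Hypothetical timestamped dataset
df = pd.DataFrame({
    "timestamp": pd.date_range("2021-01-01", periods=10, freq="D"),
    "value": range(10),
})

# Sort by time, then cut: the newest 20% becomes the validation set,
# so we never train on the future and validate on the past.
df = df.sort_values("timestamp")
cut = int(len(df) * 0.8)
train, valid = df.iloc[:cut], df.iloc[cut:]

assert train["timestamp"].max() < valid["timestamp"].min()
```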
In computer science, most algorithms run to completion: they provide a single answer after performing a fixed amount of computation. But nowadays, in the machine learning era, models take a long time to train and predict, and the user may wish to terminate the algorithm prior to completion. That is where anytime algorithms come in: an anytime algorithm can return a valid solution even if it is interrupted before it ends. The longer it keeps running, the better the solution the user gets.
What makes anytime algorithms unique is their ability to return many possible outcomes for any given input. An anytime algorithm uses well-defined quality measures to monitor progress in problem solving and to distribute computing resources. While this may sound like dynamic programming, the difference is that an anytime algorithm is fine-tuned through random adjustments rather than sequentially.
Figure 1: The expected performance of anytime algorithm
Initialize: while some algorithms start with immediate guesses, anytime algorithms take a more calculated approach and have a start-up period before making any guesses.
Growth direction: how the quality of the program's "output" or result varies with run time.
Growth rate: the amount of improvement at each step. Does it change constantly or unpredictably?
End condition: the amount of runtime needed.
- Clustering with time series
In clustering, we must compute the distance or similarity between pairs of time series, and there are many possible measures. Much research has shown that dynamic time warping (DTW) is more robust than the alternatives, but the O(N²) complexity of DTW makes computation troublesome.
With an anytime algorithm, life becomes much easier. Below is pseudocode for an anytime clustering algorithm.
Algorithm [Clusters] = AnytimeClustering(Dataset)
1.  aDTW = BuildApproDistMatrix(Dataset)
2.  Clusters = Clustering(aDTW, Dataset)
3.  Disp("Setup is done, interruption is possible")
4.  O = OrderToUpdateDTW(Dataset)
5.  For i = 1:Length(O)
6.      aDTW(O(i)) = DTW(Dataset, O(i))
7.      if UserInterruptIsTrue()
8.          Clusters = Clustering(aDTW, Dataset)
9.          if UserTerminateIsTrue()
10.             return
11.         endif
12.     endif
13. endfor
14. Clusters = Clustering(aDTW, Dataset)
First, we approximate the distance matrix of the dataset.
Second, we cluster to get a basic initial result.
Third, we use a heuristic to decide the order in which to replace approximate entries with exact DTW values; after each update, we get a better result than before.
Finally, if the algorithm keeps running long enough, it finishes the task and we get the final, optimal output.
- Learning optimal Bayesian networks
Another common example of an anytime algorithm is searching for an optimal Bayesian network. As we know, learning a Bayesian network is NP-hard: if we have too many variables or events, an exact algorithm will fail due to limited time and memory. That is why the anytime weighted A* algorithm is often applied to find an optimal result.
Figure 2: Bayesian Networks of 4 variables
The weighted A* algorithm minimizes the cost function
f(n) = g(n) + ε * h(n)
where n is the current node on the graph, g(n) is the cost of the path from the start node to n, h(n) is a heuristic that estimates the cost of the cheapest path from n to the goal node, and ε is a weight that is gradually lowered to 1. The anytime algorithm uses this cost function to stop and continue at any time.
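A toy sketch of the idea (the graph, costs, and heuristic values below are made up): weighted A* inflates the heuristic by ε, and rerunning it while lowering ε toward 1 gives the anytime behaviour of progressively better-quality solutions.

```python
import heapq

def weighted_a_star(graph, h, start, goal, eps):
    """Weighted A*: expands nodes in order of f(n) = g(n) + eps * h(n)."""
    frontier = [(eps * h[start], 0, start)]   # (f, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, n = heapq.heappop(frontier)
        if n == goal:
            return g                           # cost of the path found
        for nxt, cost in graph[n]:
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + eps * h[nxt], g2, nxt))
    return None

# Toy graph as adjacency lists with edge costs, plus a heuristic table.
graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("g", 5)],
         "b": [("g", 1)], "g": []}
h = {"s": 3, "a": 2, "b": 1, "g": 0}

# Anytime use: start greedy (large eps), then refine toward optimal (eps = 1).
for eps in (3.0, 1.5, 1.0):
    print(eps, weighted_a_star(graph, h, "s", "g", eps))
```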
In any data science project, once the variables that correspond to business objectives are defined, typically the project proceeds with the following aims: (1) to understand the data quantitatively, (2) to describe the data by producing a ‘model’ that represents adequately the data at hand, and (3) to use this model to predict the outcome of those variables from ‘future’ observed data.
Anyone dealing with machine learning is bound to be familiar with the phenomenon of overfitting, one of the first things taught to students, and probably one of the problems that will continue to shadow us in any data-centric job: when the predictive model works very well in the lab, but behaves poorly when deployed in the real world.
What is Overfitting?
Overfitting is defined as “the production of an analysis which corresponds too closely or exactly to a particular set of data, and may therefore fail to fit additional data or predict future observations reliably.”
Whether a regression, decision tree, or neural network, the different models are constructed differently: some depend on a few features (in the case of a linear model), some are too complex to visualize and understand at a glance (in the case of a deep neural network). Since there is inherently noise in data, the model fitting ('learning') process can either learn too little of the pattern ('underfit'), or learn too many 'false' patterns discernible in the noise of the seen data ('overfit') that are not meant to be present in the intended application and end up distorting the model. Typically, this can be illustrated in the diagram below:
In Machine Learning (ML), training and testing are often done on different parts of the available data. The models, whether they be trees, neural networks, or equations, that have characterized the training data very well are then validated on a 'different' part of the data. Overfitting is detected when the model has learned too many 'false patterns' from the training data that do not generalize to the validation data. As such, most ML practitioners reduce the problem of overfitting to a matter of knowing when to stop training, typically by choosing an 'early stopping' value around the red dotted line, illustrated in the loss vs. number of iterations graph below.
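A minimal sketch of that early-stopping rule on a made-up validation-loss curve:

```python
# Synthetic validation losses: they fall, then rise once overfitting begins.
val_losses = [0.9, 0.7, 0.55, 0.5, 0.48, 0.49, 0.53, 0.6]

def early_stop(losses, patience=2):
    """Return the epoch to keep: stop after `patience` epochs without improvement."""
    best, best_i, waited = float("inf"), 0, 0
    for i, loss in enumerate(losses):
        if loss < best:
            best, best_i, waited = loss, i, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_i  # the epoch with the best validation loss

print(early_stop(val_losses))  # prints 4
```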
If detecting overfitting is reduced to a matter of "learning enough", inherent to the contemporary ML training process, it is assumed to be "solved" when the model works on a portion of the held-out data. In some cases, k-fold cross-validation is performed, which splits the data into k 'folds': each fold serves once as the held-out validation set while the remaining k-1 folds are used for training. Given that most models do not make it to the crucible of a production environment, most practitioners stop at demonstrating (in a lab-controlled environment) that the model "works", when the numbers from the cross-validation statistics check out fine, showing consistent performance across all folds.
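A small sketch of k-fold splitting with scikit-learn (the data and fold count are illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features each

# k-fold cross-validation: each of the k folds serves once as the held-out
# validation set, while the other k-1 folds form the training set.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(X):
    assert len(train_idx) == 8 and len(val_idx) == 2
    # model.fit(X[train_idx], ...); model.score(X[val_idx], ...)
```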
But.. is it so simple?
In simpler or more trivial models, overfitting may not even be noticeable! However, with more rigorous testing, or when the model is lucky enough to be deployed to production, there is usually more than one validation set (or more than one client's data!). It is not infrequent to see various validation or real-world data sets fail to 'converge' at the same rate, making the task of choosing the early stopping point more challenging. With just two different validation data sets, one can observe the following:
What about this curve?
When we get curves that look like the graphs above, it is harder to answer these questions: When does overfitting 'begin'? How do we deal with multiple loss functions? What is happening here?
Digging your way out
Textbook ML prescribes many 'solutions' that might just work if you are lucky, sometimes without you knowing exactly what the problem is:
- Data augmentation:
Deep down, most overfitting problems would go away if we had an abundant amount of 'good enough' data, which for most data scientists is a utopian dream. Limited resources make it impossible to collect the perfect data: clean, complete, unbiased, independent, and cheap. For many neural network ML pipelines, data augmentation has become de rigueur; it aims to keep the model from learning characteristics unique to the dataset by multiplying the amount of training data available. Without collecting more data, the data can be 'multiplied' by varying it through small perturbations so the samples seem different to the model. Whether or not this approach is effective will depend on what your model is learning!
- Regularization:
A regularization term is routinely added in model fitting to penalize overly complex models. In regression methods, L1 or L2 penalty terms are often added to encourage smaller coefficients and thus 'simpler' models. In decision trees, methods such as pruning or limiting maximum tree depth are typical ways to 'keep the model simple'. Combined with ensemble methods, e.g. bagging or boosting, they can be used to avoid overfitting and make use of multiple weak learners to arrive at a robust predictor. In DL techniques, regularization is achieved by introducing a dropout layer that randomly turns off neurons during training.
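A small sketch of the L2 case with scikit-learn (toy data; the penalty strength alpha is arbitrary): the ridge fit ends up with a smaller coefficient norm than the unpenalized fit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Noisy toy regression: y depends only on the first feature, plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = X[:, 0] + 0.1 * rng.normal(size=30)

plain = LinearRegression().fit(X, y)
l2 = Ridge(alpha=10.0).fit(X, y)  # L2 penalty encourages smaller coefficients

# The penalized model has a smaller squared coefficient norm.
print((plain.coef_ ** 2).sum(), (l2.coef_ ** 2).sum())
```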
If you have gone through all the steps outlined above and still feel that you just got lucky with a particular training/validation data set, you are not alone. We need to dig deeper in order to truly understand what is happening and get out of this sticky mess, or accept the risk of deploying such a limited model and setting a future trap for ourselves!
PyTorch is a deep learning framework and a scientific computing package.
The scientific computing aspect of PyTorch is primarily a result of PyTorch's tensor library and its associated tensor operations. That means you can take advantage of PyTorch for many computing tasks, thanks to its tensor-operation support, without touching the deep learning modules.
It is important to note that PyTorch tensors and their associated operations are very similar to NumPy n-dimensional arrays. A tensor is, in fact, an n-dimensional array.
PyTorch builds its library around object-oriented programming (OOP) concepts. With object-oriented programming, we orient our program design and structure around objects. A tensor in PyTorch is represented by the torch.Tensor object, which can be created from a NumPy ndarray object; the two objects then share memory. This makes the transition between PyTorch and NumPy very cheap from a performance perspective.
With PyTorch tensors, GPU support is built-in. It’s very easy with PyTorch to move tensors to and from a GPU if we have one installed on our system. Tensors are super important for deep learning and neural networks because they are the data structure that we ultimately use for building and training our neural networks.
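A short sketch of the NumPy interop and GPU move described above (assumes PyTorch is installed):

```python
import numpy as np
import torch

# A tensor created with torch.from_numpy shares memory with the ndarray:
a = np.ones(3, dtype=np.float32)
t = torch.from_numpy(a)
a[0] = 5.0               # mutate the numpy array...
print(t[0].item())       # ...and the tensor sees the change: prints 5.0

# Moving to a GPU is one call away (guarded, in case none is installed):
if torch.cuda.is_available():
    t = t.to("cuda")
```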
A bit of history.
The initial release of PyTorch was in October 2016. Before PyTorch was created, there was (and still is) another framework called Torch, also a machine learning framework, but based on the Lua programming language. The connection between PyTorch and the Lua version, Torch, exists because many of the developers who maintain the Lua version are the individuals who created PyTorch, and they have been working at Facebook ever since.
Below are the primary PyTorch modules we’ll be learning about and using as we build neural networks along the way.
Why use Pytorch for deep learning?
- PyTorch’s design is modern, Pythonic. When we build neural networks with PyTorch, we are super close to programming neural networks from scratch. When we write PyTorch code, we are just writing and extending standard Python classes, and when we debug PyTorch code, we are using the standard Python debugger. It’s written mostly in Python, and only drops into C++ and CUDA code for operations that are performance bottlenecks.
- It is a thin framework, which makes it more likely that PyTorch will be capable of adapting to the rapidly evolving deep learning environment as things change quickly over time.
- It stays out of the way, which lets us focus on neural networks rather than on the framework itself.
Why PyTorch is great for deep learning research
The reason for this research suitability is that PyTorch uses a dynamic computational graph, in contrast with TensorFlow, which uses a static computational graph, to calculate derivatives.
Computational graphs are used to graph the function operations that occur on tensors inside neural networks. These graphs are then used to compute the derivatives needed to optimize the neural network. A dynamic computational graph means the graph is generated on the fly as the operations are created; static graphs are fully determined before the actual operations occur.
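A minimal sketch of graph construction on the fly (assumes PyTorch is installed): ordinary Python control flow decides which operations, and hence which graph, get built.

```python
import torch

# The graph is built as operations run, so even a Python `if` can shape it;
# backward() then computes derivatives through whatever graph was built.
x = torch.tensor(3.0, requires_grad=True)
if x > 0:                # ordinary Python branching shapes the graph
    y = x * x            # dy/dx = 2x
else:
    y = -x
y.backward()
print(x.grad.item())     # 2 * 3.0 = 6.0
```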
Make your machine learning team's work easier, focus more on the business, and deploy quickly with the AWS managed service SageMaker.
Today, machine learning (ML) is solving complex problems that create business value for customers, and many companies apply ML to hard business problems. ML brings many benefits, but also many challenges in building models with high accuracy. I currently work on an AI team that helps the company deliver AI/ML projects quickly and helps the data science (DS) team develop the data pipelines and machine learning pipelines that let projects grow and be delivered quickly with high quality.
Overview of Machine Learning development
Figure 1. Machine learning process
Here is the basic machine learning process, as practiced by big companies. It includes multiple phases (business analysis, data processing, model training, and deployment), multiple steps in each phase, and a fleet of tools used in each step.
Business problems: the problems that challenge the business and for which ML can be a better solution.
ML problem framing: the phase that helps DS and engineering define the ML problem, propose ML solutions, design the data pipeline, and plan.
Data processing (collection, integration, preparation and cleaning, visualization and analysis): this phase includes multiple steps that prepare data for visualization and ML training.
Model training (feature engineering, model training and parameter tuning, model evaluation): DS and developers work in this phase to engineer features, prepare data for a specific model, and train models using frameworks such as TensorFlow or PyTorch.
When we don't use a platform such as AWS SageMaker or Azure ML Studio, development takes more time and requires a complex stack of skills: compute, networking, storage, ML frameworks, programming languages, feature engineering, and more.
Developing an ML model with gaps in those skills, and with many complex components, costs extra time on programming and compute, and creates challenges for the engineering team that uses and deploys the model. In Figure 2, we have multiple tiers of cloud computing (Infrastructure as a Service, Platform as a Service, Application as a Service) that provide resource types at different levels of control. We can choose a specific layer, or a combination of layers, to meet the business objective. For machine learning projects and research environments, I highly recommend that DS and developers use PaaS and AaaS first: they meet business requirements quickly while reducing cost and effort. That is the main reason I want to describe AWS SageMaker, which can serve as a standard platform for an ML team to quickly develop and deploy ML models, focus on solving ML business problems, and improve model quality.
AWS SageMaker offers the basis of an end-to-end machine learning development environment
Figure 3. AWS SageMaker benefits
When developing an ML model, we need to take care of multiple parts, and sometimes we want to try a new model and see its accuracy as quickly as possible. This also depends on questions such as "Is there enough data for processing and training?" and "How much time will training a new model take?". AWS SageMaker is used by thousands of AI companies and was developed by experts following ML best practices, which helps improve the ML process and working environment. In my opinion, when I want to focus on building a model and solving the hard problem, I want everything else to be easy at a basic level so I can spend more time on the main problems first. SageMaker provides a notebook solution, and a notebook is a great space for coding and analyzing data for training. Together with the SageMaker SDK, I can easily connect to and use other AWS resources such as S3 buckets and training jobs. Everything helps me quickly develop and deliver a new model. So, I want to highlight the main benefits we get from this service, along with its disadvantages.
– SageMaker provides a solution for training ML models with training jobs that are distributed, elastic, and high-performance, using spot instances to save up to 90% of cost; you pay only for training time, in seconds. (document)
– Elastic Inference: this feature helps save cost for computations that need a GPU, such as deep learning model prediction. (document)
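As an illustrative sketch of the training-job option above (all names and values are hypothetical, and many required fields are omitted), the key knobs of a SageMaker training job look like this:

```python
# Hypothetical sketch of key fields in a SageMaker CreateTrainingJob request;
# names and values are illustrative, not a working account setup.
training_job = {
    "TrainingJobName": "sort-model-demo",
    "ResourceConfig": {                      # elastic, pay-per-second compute
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 10,
    },
    "EnableManagedSpotTraining": True,       # spot instances to cut cost
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    "OutputDataConfig": {"S3OutputPath": "s3://my-example-bucket/output/"},
}

# With boto3 (requires AWS credentials and more fields; not executed here):
# import boto3
# boto3.client("sagemaker").create_training_job(**training_job)
```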
* 🎯 Reduce the skills gap and focus on solving business problems. We can easily set up a training environment from a notebook with a few clicks, with elastic CPUs/GPUs.
* 🌐 Connectivity and easy deployment
– AWS SageMaker is an AWS managed service, and it is easy to integrate with other AWS services inside a private network. This also benefits big data solutions: ETL processing can happen inside the private network, reducing transfer costs.
– The Endpoint feature helps DS/developers easily deploy a trained model with a few clicks or from the SDK. (document)
* 🏢 Easy management: when multiple teams work on AWS, more resources appear every day, and managing resources and roles becomes a challenge for the IT team, with impacts on cost and security. An AWS managed service helps reduce the resources we need to create.
* AWS SageMaker is a managed service; it implements best practices and focuses on popular frameworks. Sometimes it will not match your requirements, so consider this before choosing it.
* 🎿 Learning new skills and basic knowledge of the AWS Cloud: when working on the AWS cloud, basic knowledge of cloud infrastructure is necessary, along with knowledge of each AWS managed service you want to use.
* 👮 It is also more expensive than a normal EC2 instance because of its dedicated ML support; we need to choose the right resources for development to save cost.
AWS SageMaker is a well-suited service for the production environment. It helps to build a quality model and a standard environment, which reduces risk in product development. We accept the trade-off to get most of the benefits and quickly achieve the team's goals. Thank you so much for reading; please let me know if you have any concerns.