Recently, Google Cloud has launched Vertex AI to help you move machine learning (ML) from test to production faster and manage your models with confidence — accelerating your ability to improve outcomes at your organization.
But Google knows many of you are just getting started with ML and have a lot to learn! In parallel with building the Vertex AI platform, teams are producing best-practices content to help you get up to speed. Google also has a dedicated event on June 10th, the Applied ML Summit, with sessions on how to apply ML technology in your projects and develop your skills in the field.
In the meantime, Google couldn't resist a quick lesson on hyperparameter tuning, because:
(a) it's awesome,
(b) you'll impress your colleagues,
(c) Google Cloud has a number of unique, field-tested technologies for it, and
(d) you'll save time by getting better ML models into production faster.
On average, Vertex Vizier finds optimal parameters for complex functions in 80% fewer trials than traditional methods.
Sounds amazing, but what exactly is it?
While machine learning models learn from data automatically, they still require user-defined knobs to guide the learning process. These knobs, usually called hyperparameters, control things like the trade-off between training accuracy and generalization. Examples of hyperparameters include the choice of optimizer and its learning rate, regularization parameters, and the number and size of hidden layers in a deep neural network (DNN).
Setting hyperparameters to their optimal values for a given data set can make a big difference in model quality. Usually, the optimal hyperparameter values are found through grid search over a small number of combinations or through tedious manual experimentation. Hyperparameter tuning automates this work for you by searching for the hyperparameter configuration that yields the best model performance.
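To make the idea concrete, here is a minimal random-search sketch, one of the simple strategies that automated tuning improves on. The `score` function is an invented stand-in for a real validation metric, not anything from Vertex AI:

```python
import random

def score(learning_rate, num_layers):
    """Hypothetical validation metric: peaks near lr=0.1 and 3 layers."""
    return 1.0 - (learning_rate - 0.1) ** 2 - 0.05 * abs(num_layers - 3)

random.seed(0)
best = None
for _ in range(50):  # 50 random trials instead of an exhaustive grid
    trial = {
        "learning_rate": random.uniform(0.001, 1.0),
        "num_layers": random.randint(1, 8),
    }
    value = score(**trial)
    if best is None or value > best[1]:
        best = (trial, value)
```

Random search already beats a coarse grid in many cases; services like Vizier go further by using the results of past trials to pick the next candidates.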
Vertex Vizier enables automatic hyperparameter tuning in several ways:
1. "Traditional" hyperparameter tuning: finding optimal hyperparameter values by measuring a single objective metric, the output of the ML model. For example, Vizier chooses the number of hidden layers and their size, the optimizer, and its learning rate, with the goal of maximizing model accuracy.
2. Early stopping: as hyperparameter configurations are evaluated, models are trained and evaluated on parts of the data set. If the evaluation metrics are streamed to Vizier while the model trains (e.g. as a function of the epoch), Vizier's early stopping algorithm can predict the final objective value and recommend stopping unpromising trials early. This conserves computing resources and speeds up convergence.
3. Transfer learning: often, models are tuned sequentially over different data sets. Vizier's built-in transfer learning carries key information over from previous hyperparameter tuning studies and uses it for faster convergence in subsequent studies.
4. AutoML is a variant of #1 in which Vertex Vizier performs both model selection and tuning of architecture and non-architecture hyperparameters. AutoML usually requires more than Vertex Vizier alone (for data ingestion, etc.), but in most cases Vizier is the "engine" behind the process. AutoML is implemented by defining a tree-like search space (a DAG search space) instead of a "flat" search space (as in #1). Note that you can use a DAG search space for any other purpose where searching a hierarchical space makes sense.
5. Sometimes you may want to optimize for more than one metric. For example, you may want to maximize model accuracy while minimizing model serving latency. Vizier can find the Pareto frontier, which captures the trade-off between the metrics and lets the user choose an appropriate balance. A simple example: I want a more accurate model, but I also want to minimize serving latency, and I don't know in advance what the right trade-off between the two metrics is. Vizier can be used to explore and plot the trade-off curve so the user can pick the most suitable point, for example, "a latency of 200 milliseconds only reduces accuracy by 0.5%."
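The "traditional" tuning of item 1 boils down to a suggest/evaluate/report cycle. The `Study` class below is a toy stand-in for a tuning service, not Vizier's real API; it only proposes random candidates and records results, whereas Vizier's suggestions are informed by past trials:

```python
import random

class Study:
    """Toy stand-in for a tuning service: suggests random configs, keeps results."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.trials = []

    def suggest(self):
        return {"learning_rate": self.rng.uniform(1e-4, 1e-1),
                "hidden_layers": self.rng.randint(1, 5)}

    def report(self, params, objective):
        self.trials.append((params, objective))

    def best_trial(self):
        return max(self.trials, key=lambda t: t[1])

def train_and_evaluate(params):
    """Hypothetical training run; returns a fake validation accuracy."""
    return (0.9 - abs(params["learning_rate"] - 0.01)
                - 0.02 * abs(params["hidden_layers"] - 3))

study = Study()
for _ in range(20):
    params = study.suggest()
    study.report(params, train_and_evaluate(params))

best_params, best_acc = study.best_trial()
```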
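The early stopping of item 2 can be sketched with a deliberately naive curve extrapolation (Vizier's actual predictor is more sophisticated; the decay model here is invented for illustration):

```python
def predict_final(curve, total_epochs):
    """Naively extrapolate a learning curve: assume the most recent
    per-epoch gain continues but shrinks by half each remaining epoch."""
    value = curve[-1]
    gain = curve[-1] - curve[-2] if len(curve) > 1 else 0.0
    for _ in range(total_epochs - len(curve)):
        gain *= 0.5
        value += gain
    return value

def should_stop_early(curve, best_so_far, total_epochs):
    """Stop a trial whose predicted final metric cannot beat the incumbent."""
    return predict_final(curve, total_epochs) < best_so_far

# A flat, unpromising trial versus the best completed result so far.
curve = [0.50, 0.52, 0.53, 0.535]
stop = should_stop_early(curve, best_so_far=0.80, total_epochs=20)
```

Here the trial plateaus well below the incumbent's 0.80, so the sketch recommends stopping it and reclaiming the compute.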
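The transfer learning of item 3 can be illustrated as warm-starting: seeding a new study with the best configurations from an earlier study on a related data set. This sketch uses invented names and a trivial ranking, not Vizier's actual transfer mechanism:

```python
def warm_start(previous_trials, top_k=3):
    """Seed a new study with the best configurations from an earlier study,
    so the search begins near known-good regions of the space."""
    ranked = sorted(previous_trials, key=lambda t: t[1], reverse=True)
    return [params for params, _ in ranked[:top_k]]

# (params, objective) pairs from a hypothetical earlier study.
previous = [({"lr": 0.1}, 0.71), ({"lr": 0.01}, 0.88),
            ({"lr": 0.3}, 0.64), ({"lr": 0.03}, 0.85)]
seeds = warm_start(previous, top_k=2)
```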
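The DAG search space of item 4 is a conditional space: which child parameters exist depends on choices made higher in the tree. A minimal sketch, with an invented two-family space:

```python
import random

# Which parameters are searched depends on the model family chosen at
# the root; the families and ranges below are purely illustrative.
SEARCH_SPACE = {
    "linear": {"l2": (1e-6, 1e-2)},
    "dnn": {"hidden_layers": (1, 5), "learning_rate": (1e-4, 1e-1)},
}

def sample(rng):
    """Sample one point from the conditional space, root first."""
    family = rng.choice(list(SEARCH_SPACE))
    params = {"model": family}
    for name, (lo, hi) in SEARCH_SPACE[family].items():
        params[name] = rng.randint(lo, hi) if isinstance(lo, int) else rng.uniform(lo, hi)
    return params

rng = random.Random(0)
trial = sample(rng)
```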
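The Pareto frontier of item 5 is the set of trials not dominated by any other trial. This standalone sketch computes it for the accuracy-versus-latency example (the numbers are made up to mirror the "200 ms, 0.5% accuracy" trade-off above):

```python
def pareto_frontier(trials):
    """Keep trials not dominated by any other: a trial is dominated if some
    other trial is at least as accurate AND at least as fast, and strictly
    better on one of the two metrics."""
    frontier = []
    for t in trials:
        dominated = any(
            o["accuracy"] >= t["accuracy"] and o["latency_ms"] <= t["latency_ms"]
            and (o["accuracy"] > t["accuracy"] or o["latency_ms"] < t["latency_ms"])
            for o in trials
        )
        if not dominated:
            frontier.append(t)
    return frontier

trials = [
    {"accuracy": 0.95, "latency_ms": 400},
    {"accuracy": 0.945, "latency_ms": 200},  # 0.5% less accurate, 2x faster
    {"accuracy": 0.90, "latency_ms": 250},   # dominated by the 200 ms trial
    {"accuracy": 0.80, "latency_ms": 100},
]
frontier = pareto_frontier(trials)
```

The 0.90-accuracy trial drops out because the 200 ms trial is both more accurate and faster; the remaining three points trace the trade-off curve a user would choose from.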