Fixing Issues in Autotune Model Tuning: A Step-by-Step Solution
The code has several issues that need to be addressed:
- In the `at` object, `task_tuning` should be passed to the `train()` function instead of using a separate `task_test`.
- The `resampling_outer` (or `custom`) resampling scheme is not being used correctly: when calling `at$train()`, you need to pass the `task` and `resampling` arguments separately (see the sketch after this list).
- In `benchmark()`, you are trying to run a grid search over multiple values of a single variable (`graph_nop`, `graph_up`, and `graph_down`). This is not necessary if you want to test each of these graphs individually; you can create a separate benchmark for each graph instead.
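If this code targets the mlr3 ecosystem (which names like `at`, `resampling_outer`, and `"classif.auc"` suggest), the resampling and benchmarking fixes might look roughly like the sketch below. This is a minimal sketch, not the original poster's code: it assumes `task`, `resampling_outer`, and the three graph learners (`graph_nop`, `graph_up`, `graph_down`) are already defined, and that their tuning ranges are tagged with `to_tune()`.

```r
# Minimal sketch, assuming mlr3 + mlr3tuning and predefined objects:
# `task`, `resampling_outer`, `graph_nop`, `graph_up`, `graph_down`.
library(mlr3)
library(mlr3tuning)

# The AutoTuner keeps its own *inner* resampling for tuning.
at <- auto_tuner(
  tuner      = tnr("grid_search"),
  learner    = graph_nop,              # one graph learner at a time
  resampling = rsmp("cv", folds = 3),  # inner resampling
  measure    = msr("classif.auc"),     # needs predict_type = "prob"
  term_evals = 50
)

# Nested resampling: the *outer* scheme goes to resample(),
# not to at$train().
rr <- resample(task, at, resampling_outer)

# To compare the three graphs, benchmark them as separate learners
# instead of folding them into one search grid.
design <- benchmark_grid(
  tasks       = task,
  learners    = list(graph_nop, graph_up, graph_down),
  resamplings = resampling_outer
)
bmr <- benchmark(design)
bmr$aggregate(msr("classif.auc"))
```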
Here’s an updated version of the code that addresses these issues:
```r
# Create the `at` object with a custom resampling scheme.
# `task` is assumed to be defined earlier in the post.
task_tuning <- data.frame(id = 1:10)

at <- autotune(
  task,
  metric      = "classif.auc",
  tuner       = "grid_search",
  tune        = task_tuning,
  max_evals   = 50,
  search_grid = list(
    graph_nop  = list(learning_rate = c(0.1, 0.5)),
    graph_up   = list(learning_rate = c(0.1, 0.5)),
    graph_down = list(learning_rate = c(0.1, 0.5))
  )
)

# Train the model, passing the task and the resampling scheme separately.
at$train(task_tuning, resampling_outer)

# Predict on the test set.
test_set <- data.frame(id = 11:20)
results <- at$predict(test_set)
```
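In mlr3 itself, `train()` and `predict()` take a task plus optional row ids rather than bare data frames, and brand-new data goes through `predict_newdata()`. A hedged sketch, assuming an mlr3tuning `AutoTuner` named `at` and a `task` with at least 20 rows:

```r
# Sketch, assuming mlr3: row ids select the tuning and test
# observations from a single task, not separate data.frames.
at$train(task, row_ids = 1:10)                  # tune, then fit on rows 1-10
prediction <- at$predict(task, row_ids = 11:20)
prediction$score(msr("classif.auc"))            # AUC needs predict_type "prob"

# For data that is not part of the task, use predict_newdata():
new_obs <- task$data(rows = 11:20)              # stand-in for unseen data
prediction2 <- at$predict_newdata(new_obs)
```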
Walking through the updated code block:
- We create a `task_tuning` data frame with id values from 1 to 10.
- We define the `at` object using the `autotune()` function, passing the `task`, `metric`, and `tuner` arguments as before, but using the `resampling_outer` argument instead of `custom`.
- We train the model using the `at$train()` function, passing the `task_tuning` data frame and the `resampling_outer` scheme.
- Finally, we predict on the test set using the `at$predict()` function.
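Once training succeeds, it is worth confirming which configuration the tuner actually picked. This is a small sketch assuming mlr3tuning's `AutoTuner`; these fields are not part of the schematic `autotune()` call shown above:

```r
# Sketch, assuming mlr3tuning's AutoTuner:
at$tuning_result              # best configuration found by the grid search
at$tuning_instance$archive    # log of every configuration evaluated
```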