Fixing Issues in Autotune Model Tuning: A Step-by-Step Solution

The code has several issues that need to be addressed:

  1. In the at object, at$train() should be called on the training/tuning task, not on the separate task_test.
  2. The resampling_outer (or any custom) resampling scheme is not used correctly. at$train() only accepts a task; the outer resampling belongs in resample(), where the task, the AutoTuner, and resampling_outer are passed as separate arguments (nested resampling).
  3. In benchmark(), you run a single grid search across graph_nop, graph_up, and graph_down. That is unnecessary if you want to evaluate each graph individually: give each graph its own AutoTuner and benchmark them as separate learners (see the sketch at the end of this post).

Here’s an updated version of the code, written against the mlr3 / mlr3tuning API. It assumes that task (the training task), task_test, graph_nop, and resampling_outer already exist from the original question, and that the graph's tunable hyperparameters are tagged with to_tune():

library(mlr3)
library(mlr3tuning)

# Inner resampling: used by the tuner to score each hyperparameter configuration
resampling_inner <- rsmp("cv", folds = 3)

# AUC needs probability predictions
graph_nop$predict_type <- "prob"

# Create the at object (AutoTuner)
at <- auto_tuner(
  tuner      = tnr("grid_search"),
  learner    = graph_nop,
  resampling = resampling_inner,
  measure    = msr("classif.auc"),
  term_evals = 50
)

# Train on the training task; tuning runs internally on the inner resampling
at$train(task)

# Predict on the held-out test task
results <- at$predict(task_test)

# Unbiased performance estimate: nested resampling with the outer scheme
rr <- resample(task, at, resampling_outer, store_models = TRUE)
rr$aggregate(msr("classif.auc"))

In this updated version:

  • We define an inner 3-fold cross-validation (resampling_inner) that the tuner uses to score each candidate configuration.
  • We build the at object with auto_tuner(), passing the grid-search tuner, the graph learner, the inner resampling, the classif.auc measure, and a budget of 50 evaluations.
  • We train the AutoTuner on the training task with at$train(task) and predict on the held-out task_test with at$predict().
  • Finally, we pass the AutoTuner together with resampling_outer to resample() to get a nested-resampling estimate of its performance.
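
The third issue is handled by giving each graph its own AutoTuner and comparing them as separate learners in one benchmark design. The sketch below is only an outline: it assumes graph_nop, graph_up, and graph_down are GraphLearners from the original question whose tunable hyperparameters are tagged with to_tune().

# One AutoTuner per graph, all sharing the same tuning setup
make_at <- function(graph) {
  graph$predict_type <- "prob"          # AUC needs probability predictions
  auto_tuner(
    tuner      = tnr("grid_search"),
    learner    = graph,
    resampling = rsmp("cv", folds = 3),
    measure    = msr("classif.auc"),
    term_evals = 50
  )
}

learners <- list(make_at(graph_nop), make_at(graph_up), make_at(graph_down))

# Evaluate all three graphs under the same outer resampling
design <- benchmark_grid(task, learners, resampling_outer)
bmr    <- benchmark(design, store_models = TRUE)
bmr$aggregate(msr("classif.auc"))

benchmark_grid() crosses the task, the three AutoTuners, and resampling_outer, so every graph is tuned and evaluated on exactly the same outer folds.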

Last modified on 2024-10-26