
Hugging Face AutoTrain

🤗 AutoTrain is a no-code tool for training state-of-the-art models for Natural Language Processing (NLP), Computer Vision (CV), Speech, and even Tabular tasks.

Weights & Biases is directly integrated into ๐Ÿค— AutoTrain, providing experiment tracking and config management. It's as easy as using a single parameter in the CLI command for your experiments!

An example of how the metrics of your experiment are logged.

Getting Started

First, we need to install autotrain-advanced and wandb.

pip install --upgrade autotrain-advanced wandb
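
If this machine is not yet authenticated with W&B, log in once so your runs can be reported to your account:

wandb login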

Getting Started: Fine-tuning an LLM

To demonstrate this integration, we will fine-tune an LLM on a math dataset and try to achieve a state-of-the-art pass@1 result on the GSM8k benchmark.

Preparing the Dataset

🤗 AutoTrain expects your custom CSV dataset in a specific format to work properly. Your training file must contain a "text" column, which is what training runs on. For best results, the "text" column should contain data in the ### Human: Question?### Assistant: Answer. format. A good example of the kind of dataset AutoTrain Advanced expects is timdettmers/openassistant-guanaco. The MetaMathQA dataset, however, has three columns: "query", "response", and "type". We will preprocess this dataset by removing the "type" column and combining the "query" and "response" columns into a single "text" column in the ### Human: Query?### Assistant: Response. format, as sketched below. The resulting dataset is rishiraj/guanaco-style-metamath, and it will be used for training.
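
A minimal preprocessing sketch with the datasets library might look like the following; the Hub ID meta-math/MetaMathQA and the output path data/train.csv are assumptions for illustration, not something prescribed by the original walkthrough.

import os
from datasets import load_dataset

# Load MetaMathQA (Hub ID assumed) with its "query", "response", and "type" columns.
dataset = load_dataset("meta-math/MetaMathQA", split="train")

def to_guanaco_style(example):
    # Combine "query" and "response" into the ### Human / ### Assistant format.
    return {"text": f"### Human: {example['query']}### Assistant: {example['response']}"}

# Drop the original columns so only the new "text" column remains.
dataset = dataset.map(to_guanaco_style, remove_columns=["query", "response", "type"])

# Write a CSV into the folder that --data-path points to in the training command.
os.makedirs("data", exist_ok=True)
dataset.to_csv("data/train.csv")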

Training Using AutoTrain Advanced

We can start training using the AutoTrain Advanced CLI. To leverage the logging functionality, we simply use the --log argument. Specifying --log wandb will seamlessly log your results to a W&B run.

autotrain llm \
--train \
--model HuggingFaceH4/zephyr-7b-alpha \
--project-name zephyr-math \
--log wandb \
--data-path data/ \
--text-column text \
--lr 2e-5 \
--batch-size 4 \
--epochs 3 \
--block-size 1024 \
--warmup-ratio 0.03 \
--lora-r 16 \
--lora-alpha 32 \
--lora-dropout 0.05 \
--weight-decay 0.0 \
--gradient-accumulation 4 \
--logging_steps 10 \
--fp16 \
--use-peft \
--use-int4 \
--merge-adapter \
--push-to-hub \
--token <huggingface-token> \
--repo-id <huggingface-repository-address>
An example of how all the configs of your experiment are saved.
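
If you prefer to pull the logged config and metrics programmatically rather than through the W&B UI, a minimal sketch with the W&B Python API looks like this; the entity, project, and run ID below are placeholders you would replace with your own.

import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run-id>")  # replace with your run's path

print(run.config)            # the hyperparameters logged for the run
print(run.history().tail())  # recent rows of logged training metrics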

More Resources
