Downloading and Sampling The Dataset
Now that your agent can act in the environment, we should show it how to leverage human demonstrations.
To get started, let’s download the minimal version of the dataset (two demonstrations from every environment). Since there are over 20 MineRL environments, this is still a sizeable download, at about 2 GB.
Then we will sample a few state-action-reward-done tuples from the dataset.
Setting up environment variables
The minerl package uses the MINERL_DATA_ROOT environment variable to locate the data directory. Please export MINERL_DATA_ROOT before running any of the commands below.
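If you prefer to configure this from Python rather than your shell, you can set the variable before minerl reads it. A minimal sketch (the path below is just an example, not a required location):

```python
import os

# Point MINERL_DATA_ROOT at a local directory of your choice.
data_root = os.path.expanduser("~/minerl_data")
os.makedirs(data_root, exist_ok=True)
os.environ["MINERL_DATA_ROOT"] = data_root
print(os.environ["MINERL_DATA_ROOT"])
```

Note that this only affects the current process (and its children), so it must run before any code that reads the variable.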
Downloading the MineRL Dataset with minerl.data.download
To download the minimal dataset into
MINERL_DATA_ROOT, run the command:
python3 -m minerl.data.download
The full dataset for a particular environment, or for a particular competition (Diamond or Basalt)
can be downloaded using the
--environment ENV_NAME and
--competition COMPETITION flags.
ENV_NAME is any Gym environment name from the documented MineRL environments.
For more information, run
python3 -m minerl.data.download --help.
As an example, to download the full dataset for “MineRLObtainDiamond-v0”, you can run
python3 -m minerl.data.download --environment "MineRLObtainDiamond-v0"
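When scripting downloads for several environments, the same invocation can be assembled programmatically. This is only a sketch: download_cmd is a hypothetical helper, and it uses only the flags documented above.

```python
import sys

def download_cmd(environment=None, competition=None):
    """Build (but do not run) a `minerl.data.download` command line.

    Hypothetical helper; the flags mirror the ones documented above.
    """
    cmd = [sys.executable, "-m", "minerl.data.download"]
    if environment is not None:
        cmd += ["--environment", environment]
    if competition is not None:
        cmd += ["--competition", competition]
    return cmd

# The full-dataset example above, as an argument list for subprocess.run:
print(download_cmd(environment="MineRLObtainDiamond-v0"))
```

Passing the result as a list to subprocess.run avoids shell-quoting issues with environment names.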
Sampling the Dataset with BufferedBatchIter
Now we can build the dataset for MineRLObtainDiamond-v0.
There are two ways of sampling from the MineRL dataset: the deprecated but still supported batch_iter, and buffered_batch_iter. batch_iter is the legacy method, kept in the code to avoid breaking changes, but we have recently realized that, when using batch_size > 1, batch_iter can fail to return a substantial portion of the data in the epoch.
If you are not already using data_pipeline.batch_iter, we recommend against it because of these issues.
The recommended way of sampling from the dataset is:
import minerl
from minerl.data import BufferedBatchIter

data = minerl.data.make('MineRLObtainDiamond-v0')
iterator = BufferedBatchIter(data)
for current_state, action, reward, next_state, done \
        in iterator.buffered_batch_iter(batch_size=1, num_epochs=1):

    # Print the POV at the first step of the sequence
    print(current_state['pov'][0])

    # Print the final reward of the sequence!
    print(reward[-1])

    # Check if the final (next_state) is terminal.
    print(done[-1])

    # ... do something with the data.
    print("At the end of trajectories the length "
          "can be < max_sequence_len", len(reward))
Moderating Human Demonstrations
MineRL-v0 uses community-driven demonstrations to help researchers develop sample-efficient techniques. Some of these demonstrations are simply suboptimal; others may feature client bugs, server errors, or adversarial behavior.
Using the MineRL viewer, you can help curate this dataset by viewing these demonstrations manually and reporting bad streams by submitting an issue to GitHub with the following information:
The stream name of the stream in question
The reason the stream or segment needs to be modified
The sample / frame number(s) (shown at the bottom of the viewer)
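As an illustration of the three pieces of information requested above, here is a hypothetical helper that assembles them into an issue body (the function name and example values are invented for this sketch, not part of minerl):

```python
def format_stream_report(stream_name, reason, frame_numbers):
    """Hypothetical helper: format a bad-stream report for a GitHub issue."""
    lines = [
        f"Stream name: {stream_name}",
        f"Reason: {reason}",
        "Frames: " + ", ".join(str(n) for n in frame_numbers),
    ]
    return "\n".join(lines)

# Example values, purely illustrative:
print(format_stream_report(
    "example-stream-v0",
    "rendering glitch makes the segment unusable",
    [120, 121, 122],
))
```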