MindSpore易点通·精讲系列 – Dataset Loading with MindDataset
Development Environment

Content Summary
In earlier articles we introduced three dataset-loading APIs: ImageFolderDataset, CSVDataset, and TFRecordDataset. This is the last article in the dataset-loading part of the series (we may cover additional APIs later if readers need them). Here we introduce MindDataset, the API for loading MindSpore's official data format, MindRecord.
A complete machine-learning workflow includes dataset reading (possibly with data processing), model definition, model training, and model evaluation. How to read data efficiently within that workflow is an important problem every deep-learning framework has to solve. TensorFlow's answer is the TFRecord format; MindSpore's answer is MindRecord. Before we begin, note what characterizes the MindRecord format: samples are aggregated into compact binary shard files with companion .db index files, the number of shards is configurable for parallel reads, and it is the storage format MindSpore's data pipeline is designed to consume.
Following our usual practice, let's start with the official documentation.

Below is a brief reading of the parameters from the official documentation:
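As a quick orientation, the sketch below constructs a MindDataset with the most commonly used parameters. This is a minimal sketch, not the article's code: the file path is a placeholder, and the parameter values are illustrative only.

```python
from mindspore.dataset import MindDataset

# Minimal construction sketch; "train.mindrecord00" is a placeholder path.
dataset = MindDataset(
    dataset_files="train.mindrecord00",  # str or list[str]; for a str, sibling shards are located automatically
    columns_list=None,                   # None reads all columns defined in the schema
    num_parallel_workers=4,              # number of parallel reader workers
    shuffle=False,                       # whether to shuffle globally
    num_shards=None,                     # total number of devices (distributed training)
    shard_id=None,                       # rank id of the current device (distributed training)
)
```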
This article uses the THUCNews dataset. If you need this dataset for commercial purposes, please contact the dataset's authors.

The dataset can be downloaded from the OpenI (启智) community.
In the API interpretation above, we noted that MindDataset reads MindRecord files, so let's first look at how MindRecord data files are generated.

Generating MindRecord data files roughly involves the following parts (not strictly in this order): creating a FileWriter, defining a schema and registering it with add_schema, optionally adding index fields with add_index, writing samples with write_raw_data, and finalizing with commit.

Below, we generate MindRecord data from the THUCNews dataset.
```python
import codecs
import os
import re

import numpy as np

from collections import Counter
from mindspore.mindrecord import FileWriter


def get_txt_files(data_dir):
    cls_txt_dict = {}
    txt_file_list = []
    # Collect the full file list and a per-class file list.
    sub_data_name_list = next(os.walk(data_dir))[1]
    sub_data_name_list = sorted(sub_data_name_list)
    for sub_data_name in sub_data_name_list:
        sub_data_dir = os.path.join(data_dir, sub_data_name)
        data_name_list = next(os.walk(sub_data_dir))[2]
        data_file_list = [os.path.join(sub_data_dir, data_name) for data_name in data_name_list]
        cls_txt_dict[sub_data_name] = data_file_list
        txt_file_list.extend(data_file_list)
        num_data_files = len(data_file_list)
        print("{}: {}".format(sub_data_name, num_data_files), flush=True)

    num_txt_files = len(txt_file_list)
    print("total: {}".format(num_txt_files), flush=True)

    return cls_txt_dict, txt_file_list


def get_txt_data(txt_file):
    with codecs.open(txt_file, "r", "UTF8") as fp:
        txt_content = fp.read()
    # Collapse all whitespace runs into single spaces.
    txt_data = re.sub(r"\s+", " ", txt_content)

    return txt_data


def build_vocab(txt_file_list, vocab_size=7000):
    counter = Counter()
    for txt_file in txt_file_list:
        txt_data = get_txt_data(txt_file)
        counter.update(txt_data)

    num_vocab = len(counter)
    if num_vocab < vocab_size - 1:
        real_vocab_size = num_vocab + 2
    else:
        real_vocab_size = vocab_size

    # pad_id is 0, unk_id is 1
    vocab_dict = {word_freq[0]: ix + 1 for ix, word_freq in enumerate(counter.most_common(real_vocab_size - 2))}

    print("real vocab size: {}".format(real_vocab_size), flush=True)
    print("vocab dict:\n{}".format(vocab_dict), flush=True)

    return vocab_dict


def make_mindrecord_files(
        data_dir, mindrecord_dir, vocab_size=7000, min_seq_length=10,
        max_seq_length=800, num_train_shard=16, num_test_shard=4):
    # Get txt files, grouped by class.
    cls_txt_dict, txt_file_list = get_txt_files(data_dir=data_dir)
    # Map each word to an id.
    vocab_dict = build_vocab(txt_file_list=txt_file_list, vocab_size=vocab_size)
    # Map each class to an id.
    class_dict = {class_name: ix for ix, class_name in enumerate(cls_txt_dict.keys())}

    data_schema = {
        "seq_ids": {"type": "int32", "shape": [-1]},
        "seq_len": {"type": "int32", "shape": [-1]},
        "seq_cls": {"type": "int32", "shape": [-1]}
    }

    train_file = os.path.join(mindrecord_dir, "train.mindrecord")
    test_file = os.path.join(mindrecord_dir, "test.mindrecord")
    train_writer = FileWriter(train_file, shard_num=num_train_shard, overwrite=True)
    test_writer = FileWriter(test_file, shard_num=num_test_shard, overwrite=True)
    train_writer.add_schema(data_schema, "train")
    test_writer.add_schema(data_schema, "test")

    # add_index only accepts primitive (scalar) fields, so it cannot be used
    # with the array fields above -- see problem 1 in section 5.
    # indexes = ["seq_ids", "seq_len", "seq_cls"]
    # train_writer.add_index(indexes)
    # test_writer.add_index(indexes)

    pad_id = 0
    unk_id = 1

    num_samples = 0
    num_train_samples = 0
    num_test_samples = 0
    train_samples = []
    test_samples = []
    for class_name, class_file_list in cls_txt_dict.items():
        class_id = class_dict[class_name]
        num_class_pass = 0
        for txt_file in class_file_list:
            txt_data = get_txt_data(txt_file=txt_file)
            txt_len = len(txt_data)
            # Skip texts that are too short; truncate texts that are too long.
            if txt_len < min_seq_length:
                num_class_pass += 1
                continue
            if txt_len > max_seq_length:
                txt_data = txt_data[:max_seq_length]
                txt_len = max_seq_length
            word_ids = []
            for word in txt_data:
                word_id = vocab_dict.get(word, unk_id)
                word_ids.append(word_id)
            # Pad to max_seq_length.
            for _ in range(max_seq_length - txt_len):
                word_ids.append(pad_id)

            num_samples += 1
            sample = {
                "seq_ids": np.array(word_ids, dtype=np.int32),
                "seq_len": np.array(txt_len, dtype=np.int32),
                "seq_cls": np.array(class_id, dtype=np.int32)}
            # Buffer samples and flush them to disk every 10000 records.
            if num_samples % 10 == 0:
                train_samples.append(sample)
                num_train_samples += 1
                if num_train_samples % 10000 == 0:
                    train_writer.write_raw_data(train_samples)
                    train_samples = []
            else:
                test_samples.append(sample)
                num_test_samples += 1
                if num_test_samples % 10000 == 0:
                    test_writer.write_raw_data(test_samples)
                    test_samples = []

    # Flush any remaining buffered samples, then finalize both writers.
    if train_samples:
        train_writer.write_raw_data(train_samples)
    if test_samples:
        test_writer.write_raw_data(test_samples)

    train_writer.commit()
    test_writer.commit()

    print("num samples: {}".format(num_samples), flush=True)
    print("num train samples: {}".format(num_train_samples), flush=True)
    print("num test samples: {}".format(num_test_samples), flush=True)


def main():
    data_dir = "/Users/kaierlong/Documents/DownFiles/tmp/009_resources/THUCNews"
    mindrecord_dir = "/Users/kaierlong/Documents/DownFiles/tmp/009_resources/mindrecords"
    make_mindrecord_files(data_dir=data_dir, mindrecord_dir=mindrecord_dir)


if __name__ == "__main__":
    main()
```
We will not expand on get_txt_files, get_txt_data, or build_vocab here; the part that matters for MindRecord is make_mindrecord_files, which maps words and classes to ids, defines the data schema, and writes the train and test samples into sharded MindRecord files.
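Stripped of the THUCNews-specific preprocessing, the writing flow inside make_mindrecord_files reduces to the skeleton below. This is a minimal sketch: the file name "demo.mindrecord" and the sample values are placeholders, not part of the article's code.

```python
import numpy as np
from mindspore.mindrecord import FileWriter

# Schema: field name -> type (and shape for array fields).
schema = {
    "seq_ids": {"type": "int32", "shape": [-1]},
    "seq_len": {"type": "int32", "shape": [-1]},
    "seq_cls": {"type": "int32", "shape": [-1]},
}

writer = FileWriter("demo.mindrecord", shard_num=1, overwrite=True)
writer.add_schema(schema, "demo")

# write_raw_data takes a list of dicts and can be called repeatedly
# with buffered batches, as make_mindrecord_files does above.
writer.write_raw_data([{
    "seq_ids": np.array([2, 3, 4, 0, 0], dtype=np.int32),
    "seq_len": np.array([3], dtype=np.int32),
    "seq_cls": np.array([1], dtype=np.int32),
}])

writer.commit()  # finalize the shard files and their .db index files
```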
Save the code from 3.1.1 to a file named generate_mindrecord.py and run the following command.

Note: replace data_dir and mindrecord_dir in the code with your own paths.
```
python3 generate_mindrecord.py
```
In the MindRecord output directory, run `tree .` to inspect the generated files. The listing is shown below.

Notes:

16 MindRecord training files are generated, matching the num_train_shard parameter in the code, and 4 test files, matching num_test_shard. The file name prefixes come from train_file and test_file in the code. This again shows that the file_name parameter of FileWriter is not the name of a single concrete data file but a prefix for the shard files.
```
.
├── test.mindrecord0
├── test.mindrecord0.db
├── test.mindrecord1
├── test.mindrecord1.db
├── test.mindrecord2
├── test.mindrecord2.db
├── test.mindrecord3
├── test.mindrecord3.db
├── train.mindrecord00
├── train.mindrecord00.db
├── train.mindrecord01
├── train.mindrecord01.db
├── train.mindrecord02
├── train.mindrecord02.db
├── train.mindrecord03
├── train.mindrecord03.db
├── train.mindrecord04
├── train.mindrecord04.db
├── train.mindrecord05
├── train.mindrecord05.db
├── train.mindrecord06
├── train.mindrecord06.db
├── train.mindrecord07
├── train.mindrecord07.db
├── train.mindrecord08
├── train.mindrecord08.db
├── train.mindrecord09
├── train.mindrecord09.db
├── train.mindrecord10
├── train.mindrecord10.db
├── train.mindrecord11
├── train.mindrecord11.db
├── train.mindrecord12
├── train.mindrecord12.db
├── train.mindrecord13
├── train.mindrecord13.db
├── train.mindrecord14
├── train.mindrecord14.db
├── train.mindrecord15
└── train.mindrecord15.db

0 directories, 40 files
```
Section 3 explained how to generate MindRecord data; this section explains how to load it.

Loading MindRecord data uses the MindDataset interface introduced in section 2.

To keep the results reproducible, shuffle is set to False.
```python
import os

from mindspore.dataset import MindDataset


def create_mindrecord_dataset(mindrecord_dir, train_mode=True):
    if train_mode:
        file_prefix = os.path.join(mindrecord_dir, "train.mindrecord00")
    else:
        file_prefix = os.path.join(mindrecord_dir, "test.mindrecord0")

    dataset = MindDataset(dataset_files=file_prefix, columns_list=None, shuffle=False)

    for item in dataset.create_dict_iterator():
        print(item, flush=True)
        break


def main():
    mindrecord_dir = "/Users/kaierlong/Documents/DownFiles/tmp/009_resources/mindrecords"
    create_mindrecord_dataset(mindrecord_dir=mindrecord_dir, train_mode=True)


if __name__ == "__main__":
    main()
```
In this code, the key point is the value passed to dataset_files in MindDataset. In 3.1.1 we set num_train_shard and num_test_shard to 16 and 4 respectively. Attentive readers may have noticed in 3.2 that the numeric suffixes of the generated files differ: the test files end in 0, 1, 2, 3, while the train files end in 00, 01, and so on. As a result, the dataset_files value differs between the train and test data, as the code above shows. If you pass train.mindrecord0 for the training data, loading fails with the error described in problem 2 of section 5.2.
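Note also that dataset_files accepts either a single file name, in which case the remaining shards with the same prefix in the same directory are located automatically, or an explicit list of files. Below is a hedged sketch of the list form, reusing the directory path from the code above; the list comprehension is illustrative, not from the article.

```python
import os

from mindspore.dataset import MindDataset

mindrecord_dir = "/Users/kaierlong/Documents/DownFiles/tmp/009_resources/mindrecords"
# Enumerate all 16 training shards explicitly: train.mindrecord00 .. train.mindrecord15.
shard_files = [os.path.join(mindrecord_dir, "train.mindrecord{:02d}".format(i)) for i in range(16)]
dataset = MindDataset(dataset_files=shard_files, shuffle=False)
```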
Save the code from 4.1.1 to load_mindrecord.py and run the following command:

```
python3 load_mindrecord.py
```

The output is as follows:

```
{'seq_cls': Tensor(shape=[1], dtype=Int32, value= [0]), 'seq_ids': Tensor(shape=[800], dtype=Int32, value= [ 40, 80, 289, 400, 80, 163, 2239, 288, 413, 94, 309, 429, 3, 890, 664, 2941, 582, 539, 14, ...... 55, 7, 5, 65, 7, 24, 40, 8, 40, 80, 1254, 396, 566, 276, 96, 42, 4, 73, 803, 857, 72, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 'seq_len': Tensor(shape=[1], dtype=Int32, value= [742])}
```

Notes:

The data is read successfully and contains three fields, seq_cls, seq_ids, and seq_len, and each field's shape matches what was written during generation.
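If you only need some of the columns, columns_list can restrict what is read, and the usual dataset operations such as batch can be chained on afterwards. The sketch below is illustrative: the column names come from the schema in 3.1.1, while the batch size of 32 is an arbitrary choice, not from the article.

```python
import os

from mindspore.dataset import MindDataset

mindrecord_dir = "/Users/kaierlong/Documents/DownFiles/tmp/009_resources/mindrecords"
file_prefix = os.path.join(mindrecord_dir, "train.mindrecord00")

# Read only two of the three columns and batch the result.
dataset = MindDataset(dataset_files=file_prefix, columns_list=["seq_ids", "seq_cls"], shuffle=False)
dataset = dataset.batch(32, drop_remainder=True)

for item in dataset.create_dict_iterator():
    # Fields are now batched: seq_ids -> (32, 800), seq_cls -> (32, 1).
    print(item["seq_ids"].shape, item["seq_cls"].shape)
    break
```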
Additional note:

If too many MindRecord data files are opened during loading, an error may occur; see problem 3 in section 5.3. In that case, temporarily raise the open-files limit to a value that allows loading to proceed:

```
# ulimit -n ${num}
ulimit -n 1024
```
```
Traceback (most recent call last):
  File "/Users/kaierlong/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01/04_mindrecord_make.py", line 167, in <module>
    main()
  File "/Users/kaierlong/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01/04_mindrecord_make.py", line 163, in main
    make_mindrecord(data_dir=data_dir, mindrecord_dir=mindrecord_dir)
  File "/Users/kaierlong/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01/04_mindrecord_make.py", line 98, in make_mindrecord
    train_writer.add_index(indexes)
  File "/Users/kaierlong/Pyenvs/env_ms_1.7.0/lib/python3.9/site-packages/mindspore/mindrecord/filewriter.py", line 223, in add_index
    raise MRMDefineIndexError("Failed to set field {} since it's not primitive type.".format(field))
mindspore.mindrecord.common.exceptions.MRMDefineIndexError: [MRMDefineIndexError]: Failed to define index field. Detail: Failed to set field seq_ids since it's not primitive type.
```
Answer:

The index fields should be primitive type, e.g. int/float/str. In 3.1.1 all three fields (seq_ids, seq_len, seq_cls) are declared as arrays rather than scalars, so add_index cannot be applied to them; this is why the add_index lines are commented out there.
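For completeness, here is a hedged sketch of a schema where an index can be added: a field declared without a shape is a scalar (primitive) field and may be indexed. The file name "indexed_demo.mindrecord" and the label field are hypothetical, not from the article.

```python
from mindspore.mindrecord import FileWriter

schema = {
    "seq_ids": {"type": "int32", "shape": [-1]},  # array field: cannot be indexed
    "label": {"type": "int32"},                   # scalar field: can be indexed
}
writer = FileWriter("indexed_demo.mindrecord", shard_num=1, overwrite=True)
writer.add_schema(schema, "indexed demo")
writer.add_index(["label"])  # only primitive fields are accepted here
```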
```
Traceback (most recent call last):
  File "/Users/kaierlong/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01/04_mindrecord_load.py", line 36, in <module>
    main()
  File "/Users/kaierlong/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01/04_mindrecord_load.py", line 32, in main
    create_mindrecord_dataset(mindrecord_dir=mindrecord_dir, train_mode=True)
  File "/Users/kaierlong/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01/04_mindrecord_load.py", line 23, in create_mindrecord_dataset
    dataset = MindDataset(dataset_files=file_prefix, columns_list=None, shuffle=False)
  File "/Users/kaierlong/Pyenvs/env_mix_dl/lib/python3.9/site-packages/mindspore/dataset/engine/validators.py", line 994, in new_method
    check_file(dataset_file)
  File "/Users/kaierlong/Pyenvs/env_mix_dl/lib/python3.9/site-packages/mindspore/dataset/core/validator_helpers.py", line 578, in check_file
    raise ValueError("The file {} does not exist or permission denied!".format(dataset_file))
ValueError: The file /Users/kaierlong/Documents/DownFiles/tmp/009_resources/mindrecords/train.mindrecord0 does not exist or permission denied!
```
Answer:

See section 4.1.2: the training shards are suffixed with two digits, so dataset_files must be train.mindrecord00, not train.mindrecord0.
```
(env_ms_1.7.0) [kaierlong@Long-De-MacBook-Pro-16]: ~/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01$ python3 04_mindrecord_load.py
Traceback (most recent call last):
  File "/Users/kaierlong/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01/04_mindrecord_load.py", line 36, in <module>
    main()
  File "/Users/kaierlong/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01/04_mindrecord_load.py", line 32, in main
    create_mindrecord_dataset(mindrecord_dir=mindrecord_dir, train_mode=True)
  File "/Users/kaierlong/Codes/OpenI/kaierlong/Dive_Into_MindSpore/code/chapter_01/04_mindrecord_load.py", line 25, in create_mindrecord_dataset
    for item in dataset.create_dict_iterator():
  File "/Users/kaierlong/Pyenvs/env_ms_1.7.0/lib/python3.9/site-packages/mindspore/dataset/engine/validators.py", line 971, in new_method
    return method(self, *args, **kwargs)
  File "/Users/kaierlong/Pyenvs/env_ms_1.7.0/lib/python3.9/site-packages/mindspore/dataset/engine/datasets.py", line 1478, in create_dict_iterator
    return DictIterator(self, num_epochs, output_numpy)
  File "/Users/kaierlong/Pyenvs/env_ms_1.7.0/lib/python3.9/site-packages/mindspore/dataset/engine/iterators.py", line 95, in __init__
    offload_model = offload.GetOffloadModel(consumer, self.__ori_dataset.get_col_names())
  File "/Users/kaierlong/Pyenvs/env_ms_1.7.0/lib/python3.9/site-packages/mindspore/dataset/engine/datasets.py", line 1559, in get_col_names
    self._col_names = runtime_getter[0].GetColumnNames()
RuntimeError: Unexpected error. Invalid file, failed to open files for reading mindrecord files. Please check file path, permission and open files limit(ulimit -a): /Users/kaierlong/Documents/DownFiles/tmp/009_resources/mindrecords/train.mindrecord11
Line of code : 247
File         : /Users/jenkins/agent-working-dir/workspace/Compile_CPU_ARM_MacOS_PY39/mindspore/mindspore/ccsrc/minddata/mindrecord/io/shard_reader.cc
```
Answer:

Raise the open-files limit with the following command. Note: choose the value of ${num} according to your machine.

```
# ulimit -n ${num}
ulimit -n 1024
```

Before the change, `ulimit -a` reports:
```
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8176
cpu time               (seconds, -t) unlimited
max user processes              (-u) 5333
virtual memory          (kbytes, -v) unlimited
```
After the change, `ulimit -a` reports:
```
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8176
cpu time               (seconds, -t) unlimited
max user processes              (-u) 5333
virtual memory          (kbytes, -v) unlimited
```
This article covered generating MindSpore's official MindRecord data format and loading it with MindDataset. For data generation, I laid out a simple set of steps from my own experience for readers to follow; for data loading, I summarized several common errors, also from experience, to help readers avoid the same pitfalls.

This is an original article and its copyright belongs to the author; it may not be reproduced without authorization!