Publication
MaterialBERT for Natural Language Processing of Materials Science Texts
A BERT (Bidirectional Encoder Representations from Transformers) model, which we named “MaterialBERT,” was generated using scientific papers from a wide range of materials-science fields as a corpus. A new vocabulary list for the tokenizer was generated from this corpus. Two BERT models with different tokenizer vocabularies were generated: one using the original vocabulary released by Google, and the other using the vocabulary newly built by the authors. Word vectors embedded during pre-training with the two MaterialBERT models reasonably reflect the meanings of material names, both in material-class clustering and in the relationships between base materials and their compounds or derivatives, not only for inorganic materials but also for organic materials and organometallic compounds. Fine-tuning on CoLA (The Corpus of Linguistic Acceptability) with the pre-trained MaterialBERT yielded a higher score than the original BERT.
MaterialBERT could be used as a starting point for generating a narrower, domain-specific BERT model in the materials-science field by transfer learning.
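A key point above is that MaterialBERT was built with a new tokenizer vocabulary derived from a materials-science corpus. The sketch below illustrates why this matters, using a minimal greedy longest-match-first WordPiece splitter (the scheme BERT tokenizers use): a domain term kept whole in the materials vocabulary is fragmented into subwords under a generic vocabulary. The tiny vocabularies here are hypothetical examples for illustration, not the actual vocabulary lists distributed with the model.

```python
def wordpiece(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first WordPiece split of a single word.

    Continuation pieces carry the '##' prefix, as in BERT vocabularies.
    Returns [unk] if no subword in the vocabulary matches.
    """
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # mark non-initial subwords
            if sub in vocab:
                piece = sub
                break
            end -= 1  # shrink the candidate until it matches
        if piece is None:
            return [unk]
        tokens.append(piece)
        start = end
    return tokens

# Hypothetical generic vocabulary: the domain term splits into four pieces.
print(wordpiece("perovskite", {"per", "##ov", "##sk", "##ite"}))
# → ['per', '##ov', '##sk', '##ite']

# Hypothetical materials vocabulary: the same term stays a single token,
# so it receives its own embedding during pre-training.
print(wordpiece("perovskite", {"perovskite"}))
# → ['perovskite']
```

Keeping frequent domain terms as single tokens is one reason word vectors from a domain-adapted vocabulary can cluster material names more meaningfully than those from a general-purpose vocabulary.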
- DOI
- First published at
- Creator
- Keyword
- Resource type
- Date published: 08/08/2022
- Rights statement
- Licensed Date: 08/08/2022
- Last modified: 10/08/2022
Items

| Title | Date Uploaded | Size | Visibility |
|---|---|---|---|
| MaterialBERT_README__20220808.md | 08/08/2022 | 5.7 KB | MDR Open |
| MaterialBERT_Dict_Pre-trained_Model.zip | | 1.14 GB | MDR Open |
| MaterialBERT_Pre-trained_Model.zip | | 1020 MB | MDR Open |
| Jxiv_article.zip | 09/08/2022 | 1.66 MB | MDR Open |