MaterialBERT for Natural Language Processing of Materials Science Texts

MDR Open Deposited

A BERT (Bidirectional Encoder Representations from Transformers) model, which we named “MaterialBERT,” has been generated using scientific papers from a wide range of materials science fields as a corpus. A new vocabulary list for the tokenizer was generated from the materials science corpus. Two BERT models with different tokenizer vocabulary lists were generated: one using the original list made by Google, and the other using the list newly made by the authors. Word vectors embedded during pre-training with the two MaterialBERT models reasonably reflect the meanings of materials names, both in material-class clustering and in the relationship between base materials and their compounds or derivatives, not only for inorganic materials but also for organic materials and organometallic compounds. Fine-tuning on CoLA (The Corpus of Linguistic Acceptability) using the pre-trained MaterialBERT showed a higher score than the original BERT.
MaterialBERT could be used as a starting point for generating a narrower, domain-specific BERT model in the materials science field by transfer learning.
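The material-class clustering described above is typically assessed by comparing the similarity of embedded word vectors: names of related materials should have more similar vectors than names of unrelated ones. The sketch below illustrates that check with cosine similarity; the four-dimensional vectors are fabricated toy data standing in for real MaterialBERT embeddings (which would be 768-dimensional), and the material names are chosen only for illustration.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-in embeddings (NOT actual MaterialBERT vectors).
# Two related polymers and one organometallic compound.
embeddings = {
    "polyethylene":  [0.9, 0.1, 0.0, 0.2],
    "polypropylene": [0.8, 0.2, 0.1, 0.3],
    "ferrocene":     [0.1, 0.9, 0.8, 0.0],
}

# In a well-trained model, same-class materials cluster together:
same_class = cosine_similarity(embeddings["polyethylene"],
                               embeddings["polypropylene"])
cross_class = cosine_similarity(embeddings["polyethylene"],
                                embeddings["ferrocene"])
print(f"polyethylene vs polypropylene: {same_class:.3f}")
print(f"polyethylene vs ferrocene:     {cross_class:.3f}")
```

With real MaterialBERT embeddings, the same comparison would be run over many material names and the pairwise similarities fed to a clustering or visualization method to judge whether material classes separate cleanly.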

First published at
Resource type
Date published
  • 08/08/2022
Rights statement
Licensed Date
  • 08/08/2022
Manuscript type
  • Author's original (Preprint)
Last modified
  • 10/08/2022