Results: Journal impact factor (IF) was higher for fraudulent papers (p < 0.001). Roughly 53% of fraudulent papers were written by a first author who had written other retracted papers (a 'repeat offender'), whereas only ...
Abstract: The Transformer, first applied in the field of natural language processing, is a type of deep neural network based mainly on the self-attention mechanism. Thanks to its strong representation ...
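Since the abstract centres on the self-attention mechanism, the following is a minimal sketch of single-head scaled dot-product self-attention, assuming a NumPy-only illustration; the function names, tensor shapes, and random projection matrices are illustrative choices and are not drawn from the surveyed work or any particular library.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head self-attention over a sequence X of shape (seq_len, d_model).
    Q = X @ Wq                            # queries, (seq_len, d_k)
    K = X @ Wk                            # keys,    (seq_len, d_k)
    V = X @ Wv                            # values,  (seq_len, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # pairwise similarities, (seq_len, seq_len)
    weights = softmax(scores, axis=-1)    # attention weights per query position
    return weights @ V                    # weighted sum of values, (seq_len, d_v)

# Usage with hypothetical dimensions and random projections.
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)

Multi-head attention, as used in the full Transformer, repeats this computation with several independent projection triples and concatenates the per-head outputs before a final linear projection; the sketch above shows only the single-head core.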