
Typical findings for BERT-like models:

- **Layer 0 (embeddings)**: Encodes surface features (word identity, position). POS tagging probes already achieve moderate accuracy at this layer.
- **Layers 1-4**: Syntactic information (POS tags, dependency relations, constituency) is maximally represented. Probing accuracy for syntactic tasks peaks in this range (see the probe sketch below).
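A minimal sketch of how such a layer-wise probe can be set up, assuming the Hugging Face `transformers` and scikit-learn libraries. The sentences, labels, and two-class label scheme below are illustrative placeholders, not a real probing dataset:

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

# Toy data: classify the second word of each sentence (hypothetical labels;
# 0 = noun, 1 = adjective in this placeholder scheme).
sentences = ["the cat sleeps", "a dog runs", "the quick fox", "one red car"]
labels = [0, 0, 1, 1]

def features_at_layer(sentence: str, layer: int):
    """Hidden state of the second word at a given layer (index 0 = embeddings)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Token 0 is [CLS], so index 2 is the second word (single word pieces here).
    return outputs.hidden_states[layer][0, 2].numpy()

# Fit one linear probe per layer; higher accuracy suggests the label is
# more linearly decodable from that layer's representations.
for layer in range(model.config.num_hidden_layers + 1):
    X = [features_at_layer(s, layer) for s in sentences]
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer:2d}: train accuracy = {probe.score(X, labels):.2f}")
```

On a real probing dataset the probe would be trained and evaluated on separate splits for each layer; plotting held-out accuracy against layer index is what produces the layer profile described above.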
