Releases: JohnSnowLabs/spark-nlp
Spark NLP 4.3.0: New HuBERT for speech recognition, new Swin Transformer for Image Classification, new Zero-shot annotator for Entity Recognition, CamemBERT for question answering, new Databricks and EMR with support for Spark 3.3, 1000+ state-of-the-art models and many more!
Overview
We are very excited to release Spark NLP 4.3.0! This has been one of the biggest releases we have ever done, and we are proud to share it with our community!
This release extends image classification support with the new Swin Transformer annotator, adds speech recognition support with the new HuBERT annotator, introduces a brand-new extractive transformer-based question answering (QA) annotator for tasks like SQuAD based on the CamemBERT architecture, and brings new Databricks and EMR runtimes with support for Spark 3.3, 1000+ state-of-the-art models, and many more enhancements and bug fixes!
We are also celebrating crossing 12600+ free and open-source models & pipelines in our Models Hub. As always, we would like to thank our community for their feedback, questions, and feature requests.
New Features
HuBERT
NEW: Introducing the HubertForCTC annotator in Spark NLP. HubertForCTC can load HuBERT models that match or surpass the SOTA approaches for speech representation learning for speech recognition, generation, and compression. The Hidden-Unit BERT (HuBERT) approach was proposed for self-supervised speech representation learning; it utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. This annotator is compatible with all the models trained or fine-tuned with HubertForCTC for PyTorch or TFHubertForCTC for TensorFlow in Hugging Face.
HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
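Below is a minimal sketch of an ASR pipeline built around the new annotator. The pretrained model name (`asr_hubert_large_ls960`) and the `audio_content` input column are assumptions for illustration; check Models Hub for the HuBERT models actually published with this release.

```python
import sparknlp
from sparknlp.base import AudioAssembler
from sparknlp.annotator import HubertForCTC
from pyspark.ml import Pipeline

spark = sparknlp.start()

# The "audio_content" column is assumed to hold arrays of float audio samples
audio_assembler = AudioAssembler() \
    .setInputCol("audio_content") \
    .setOutputCol("audio_assembler")

# "asr_hubert_large_ls960" is an assumed pretrained name; browse Models Hub
speech_to_text = HubertForCTC \
    .pretrained("asr_hubert_large_ls960", "en") \
    .setInputCols(["audio_assembler"]) \
    .setOutputCol("text")

pipeline = Pipeline(stages=[audio_assembler, speech_to_text])
```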
Swin Transformer
NEW: Introducing the SwinForImageClassification annotator in Spark NLP. SwinForImageClassification can load transformer-based deep learning models with state-of-the-art performance in vision tasks. The Swin Transformer improves on the Vision Transformer (ViT) (Dosovitskiy et al., 2020) in both accuracy and efficiency. This annotator is compatible with all the models trained or fine-tuned with SwinForImageClassification for PyTorch or TFSwinForImageClassification for TensorFlow in Hugging Face.
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
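A minimal sketch of an image classification pipeline with the new annotator follows. The pretrained model name (`image_classifier_swin_base_patch4_window7_224`) is an assumption; check Models Hub for the Swin checkpoints shipped with this release.

```python
import sparknlp
from sparknlp.base import ImageAssembler
from sparknlp.annotator import SwinForImageClassification
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Load a folder of images via Spark's built-in image data source
image_df = spark.read.format("image").load("path/to/images")

image_assembler = ImageAssembler() \
    .setInputCol("image") \
    .setOutputCol("image_assembler")

# Assumed pretrained name; adjust to an available Swin model
image_classifier = SwinForImageClassification \
    .pretrained("image_classifier_swin_base_patch4_window7_224", "en") \
    .setInputCols(["image_assembler"]) \
    .setOutputCol("class")

pipeline = Pipeline(stages=[image_assembler, image_classifier])
result = pipeline.fit(image_df).transform(image_df)
```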
Zero-Shot for Named Entity Recognition
Zero-Shot Learning refers to the process by which a model learns how to recognize objects (images, text, any features) without any labeled training data to help in the classification.
NEW: Introducing the ZeroShotNerModel annotator in Spark NLP. You can use the ZeroShotNerModel annotator to construct simple questions/answers mapped to NER labels such as PERSON and NORP. Under the hood we use the RoBERTa for Question Answering architecture, which lets you use any of the 460 models available on Models Hub to build your zero-shot entity recognition with no training dataset at all!
zero_shot_ner = ZeroShotNerModel.pretrained("roberta_base_qa_squad2", "en") \
.setEntityDefinitions(
{
"NAME": ["What is his name?", "What is my name?", "What is her name?"],
"CITY": ["Which city?", "Which is the city?"]
}) \
.setInputCols(["sentence", "token"]) \
.setOutputCol("zero_shot_ner")
This powerful annotator with such simple rules can detect those entities from the following input: "My name is Clara, I live in New York and Hellen lives in Paris."
+-----------------------------------------------------------------+------+------+----------+------------------+
|result |result|word |confidence|question |
+-----------------------------------------------------------------+------+------+----------+------------------+
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-CITY|Paris |0.5328949 |Which is the city?|
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-NAME|Clara |0.9360068 |What is my name? |
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-CITY|New |0.83294415|Which city? |
|[My name is Clara, I live in New York and Hellen lives in Paris.]|I-CITY|York |0.83294415|Which city? |
|[My name is Clara, I live in New York and Hellen lives in Paris.]|B-NAME|Hellen|0.45366877|What is her name? |
+-----------------------------------------------------------------+------+------+----------+------------------+
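For context, here is a minimal sketch of how the `zero_shot_ner` annotator defined above could be wired into a full pipeline; the surrounding stages are standard Spark NLP annotators and are not part of the new feature itself.

```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, NerConverter
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

# Group the B-/I- tags produced by zero_shot_ner (defined above) into chunks
ner_converter = NerConverter() \
    .setInputCols(["sentence", "token", "zero_shot_ner"]) \
    .setOutputCol("ner_chunk")

pipeline = Pipeline(stages=[
    document_assembler, sentence_detector, tokenizer, zero_shot_ner, ner_converter
])

data = spark.createDataFrame(
    [["My name is Clara, I live in New York and Hellen lives in Paris."]]
).toDF("text")

result = pipeline.fit(data).transform(data)
```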
CamemBERT for Question Answering
NEW: Introducing the CamemBertForQuestionAnswering annotator in Spark NLP. CamemBertForQuestionAnswering can load CamemBERT models with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). This annotator is compatible with all the models trained or fine-tuned with CamembertForQuestionAnswering for PyTorch or TFCamembertForQuestionAnswering for TensorFlow in Hugging Face.
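A minimal sketch of extractive QA with the new annotator. The pretrained model name (`camembert_base_qa_fquad`) and the French example sentence are assumptions for illustration; check Models Hub for the CamemBERT QA models published with this release.

```python
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import CamemBertForQuestionAnswering
from pyspark.ml import Pipeline

document_assembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

# Assumed pretrained name; adjust to an available model from Models Hub
span_classifier = CamemBertForQuestionAnswering \
    .pretrained("camembert_base_qa_fquad", "fr") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline(stages=[document_assembler, span_classifier])

data = spark.createDataFrame(
    [["Où est-ce que je vis ?", "Mon nom est Wolfgang et je vis à Berlin."]]
).toDF("question", "context")

result = pipeline.fit(data).transform(data)
```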
Models Hub
Models Hub introduces a new filter by annotator, which should make it easier to navigate and find models.
Improvements & Bug Fixes
- New `Date2Chunk` annotator to convert `DATE` outputs coming from the `DateMatcher` and `MultiDateMatcher` annotators to `CHUNK`, which is accepted by a wider range of annotators
- Spark NLP 4.3.0 supports Apple Silicon M1 and M2 (still experimental until GitHub officially supports Apple Silicon). We have renamed `m1` to `silicon` and `apple_silicon` in our code for better clarity
- Add new templates for issues, docs, and feature requests on GitHub
- Add new log4j2 properties for Spark 3.3.x, which ships with Log4j 2.x, to control the logs on Apache Spark
- Cross-compatibility of all saved pipelines across all major releases of Apache Spark and PySpark
- Relocate the Spark NLP examples to the examples directory in our main repository. We will update them on each release, keep a history of the changes for each version, and add more languages, especially more use cases with Java and Scala
- Add PyDoc documentation for `ResourceDownloader` in Python (`clearCache()`, `showPublicModels()`, `showPublicPipelines()`, and `showAvailableAnnotators()`)
- Fix the delimiter id calculation in CamemBERT annotators; the delimiter id is correct as-is and does not need any offset
- Fix an `AnalysisException` that requires a different caught message on Spark 3.3
- Fix copying existing models & pipelines on S3 before unzipping when `cache_pretrained` is defined as an S3 bucket
- Fix copying existing models & pipelines on GCP before unzipping when `cache_pretrained` is defined as a GCP bucket
- Fix `loadSavedModel()` trying to load external models from private S3 buckets, with better error handling and warnings
- Enable the `params` argument in the Spark NLP start function. You can create a `params = {}` dict with all Spark NLP and Apache Spark configs and pass it when starting the Spark NLP session (see the sketch after this list)
- Add support for `doc id` in the `CoNLL()` class when reading CoNLL files with an `id` inside each document's header
- Welcoming 6 new Databricks runtimes to our Spark NLP family:
  - Databricks 12.0
  - Databricks 12.0 ML
  - Databricks 12.0 ML GPU
  - Databricks 12.1
  - Databricks 12.1 ML
  - Databricks 12.1 ML GPU
- Welcoming 2 new EMR 6.x releases to our Spark NLP family:
  - EMR 6.8.0 (Apache Spark 3.3.0 / Hadoop 3.2.1)
  - EMR 6.9.0 (Apache Spark 3.3.0 / Hadoop 3.3.3)
- New article on semantic similarity with Spark NLP on Play/API/Swagger: https://medium.com/spark-nlp/semantic-similarity-with-sparknlp-da148fafa3d8
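As referenced in the `params` bullet above, here is a hedged sketch of passing a config dict to the start function; the specific keys and the S3 bucket shown are illustrative, not required settings.

```python
import sparknlp

params = {
    "spark.driver.memory": "16G",
    "spark.kryoserializer.buffer.max": "2000M",
    # Example Spark NLP setting: point cache_pretrained at an assumed S3 bucket
    "spark.jsl.settings.pretrained.cache_folder": "s3://my-bucket/cache_pretrained",
}

spark = sparknlp.start(params=params)
```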
Dependencies & Code Changes
- Update to Apache Spark 3.3.1 (not shipped with Spark NLP)
- Update the GCP dependency to 2.16.0
- Update ScalaTest to 3.2.14
- Start publishing the spark-nlp-m1 Maven package as spark-nlp-silicon
- Rename all read-model traits to generic names. A new ai module paves the path to another DL engine
- Rename TF backends to more generic DL names
- Refactor more duplicated code in transformer embeddings
Models
Spark NLP 4.3.0 comes with 1000+ state-of-the-art pre-trained transformer models in many languages.
Featured Models
Model | Name | Lang |
---|---|---|
DistilBertForQuestionAnswering | distilbert_qa_en_de_vi_zh_es_model | xx |
DistilBertForQuestionAnswering | distilbert_qa_extractive | en |
DistilBertForQuestionAnswering | distilbert_qa_base_cased_squadv2 | xx |
RoBertaForQuestionAnswering | [roberta_qa_roberta](https://nlp.johnsnowlabs.com/2023/01... |
Spark NLP 4.2.8: Patch release
Overview
Spark NLP 4.2.8 comes with some important bug fixes and improvements. As a result, we highly recommend updating to this latest version if you are using Spark NLP 4.2.x.
As always, we would like to thank our community for their feedback, questions, and feature requests.
Bug Fixes & Improvements
- Fix the issue with optional keys (labels) in metadata when using XXXForSequenceClassification annotators. Metadata entries such as Some(neg) -> 0.13602075 are now emitted as neg -> 0.13602075, in harmony with all the other classifiers. #13396
before 4.2.8:
+-----------------------------------------------------------------------------------------------+
|label |
+-----------------------------------------------------------------------------------------------+
|[{category, 0, 87, pos, {sentence -> 0, Some(neg) -> 0.13602075, Some(pos) -> 0.8639792}, []}] |
|[{category, 0, 47, neg, {sentence -> 0, Some(neg) -> 0.7505674, Some(pos) -> 0.24943262}, []}] |
|[{category, 0, 17, pos, {sentence -> 0, Some(neg) -> 0.31065974, Some(pos) -> 0.6893403}, []}] |
|[{category, 0, 71, neg, {sentence -> 0, Some(neg) -> 0.5079189, Some(pos) -> 0.4920811}, []}] |
+-----------------------------------------------------------------------------------------------+
after 4.2.8:
+-----------------------------------------------------------------------------------+
|label |
+-----------------------------------------------------------------------------------+
|[{category, 0, 87, pos, {sentence -> 0, neg -> 0.13602075, pos -> 0.8639792}, []}] |
|[{category, 0, 47, neg, {sentence -> 0, neg -> 0.7505674, pos -> 0.24943262}, []}] |
|[{category, 0, 17, pos, {sentence -> 0, neg -> 0.31065974, pos -> 0.6893403}, []}] |
|[{category, 0, 71, neg, {sentence -> 0, neg -> 0.5079189, pos -> 0.4920811}, []}] |
+-----------------------------------------------------------------------------------+
- Introducing a config to skip LightPipeline validation of inputCols on the Python side for projects depending on Spark NLP. This toggle should only be used for specific annotators that do not follow the convention of predefined inputAnnotatorTypes and outputAnnotatorType #13402
Documentation
- TF Hub & HuggingFace to Spark NLP
- Models Hub with new models
- Spark NLP documentation
- Spark NLP Scala APIs
- Spark NLP Python APIs
- Spark NLP Workshop notebooks
- Spark NLP publications
- Spark NLP in Action
- Spark NLP training certification notebooks for Google Colab and Databricks
- Spark NLP Display for visualization of different types of annotations
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
Installation
Python
#PyPI
pip install spark-nlp==4.2.8
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.8
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.8
M1
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.8
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.8
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.8
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.8
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>4.2.8</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>4.2.8</version>
</dependency>
spark-nlp-m1:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-m1_2.12</artifactId>
<version>4.2.8</version>
</dependency>
spark-nlp-aarch64:
<!-- https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64 -->
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>4.2.8</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-4.2.8.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-4.2.8.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-m1-assembly-4.2.8.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-4.2.8.jar
What's Changed
- Updated VIT model hub cards to remove duplicate entities by @ahmedlone127 in #13281
- Models hub by @maziyarpanahi in #13340
- 2023-01-13-finmapper_wikipedia_parentcompanies_en (#13341) by @josejuanmartinez in #13342
- Models hub legal by @josejuanmartinez in #13343
- Models hub finance by @josejuanmartinez in #13345
- Models hub internal by @Cabir40 in #13351
- added 4.2.7 HC RN by @Cabir40 in #13353
- Add new demos 29 by @agsfer in #13355
- Updated compat table jsl ocr by @albertoandreottiATgmail in #13356
- Models Hub v2.9.0 by @pabla in #13361
- Update head.html by @agsfer in #13367
- FEATURE NMH-140: Add the "Copy S3 URI" to existing documents [skip-test] by @pabla in #13368
- Update programmingLanguageSwitcherScalaPython.js by @agsfer in #13370
- Added Visual NLP 4.2.1 release notes by @albertoandreottiATgmail in #13381
- Update release notes 1 by @albertoandreottiATgmail in #13384
- Finance NLP 1.6.0 by @josejuanmartinez in #13385
- Legal NLP 1.6.0 by @josejuanmartinez in #13386
- Update release notes 1 by @albertoandreottiATgmail in #13387
- Docs/nlp lab4.6.2 by @rpranab in #13394
- Update tabs in docs by @agsfer in #13395
- Legal 1.6.0 additional model by @josejuanmartinez in #13398
- Finance NLP 1.6.0 by @josejuanmartinez in #13399
- [skip ci] Create PR 4.2.7-healthcare-docs-debe9225c540f2f95c464ed9e4be42807e431106-18 by @jsl-builder in #13372
- Fixed some md files by @Damla-Gurbaz in #13400
- Uptade ocr cards by @aymanechilah in #13407
- 428 release candidate by @maziyarpanahi in #13406
Full Changelog: 4.2.7...4.2.8
Spark NLP 4.2.7: Patch release
Overview
Spark NLP 4.2.7 comes with some important bug fixes and improvements. As a result, we highly recommend updating to this latest version if you are using Spark NLP 4.2.x.
As always, we would like to thank our community for their feedback, questions, and feature requests.
Bug Fixes & Enhancements
- Fix the outputAnnotatorType issue in pipelines with the Finisher annotator. This change adds outputAnnotatorType to AnnotatorTransformer to avoid loading the outputAnnotatorType attribute when a stage in the pipeline does not use it
- Fix the wrong sentence index calculation in metadata by annotators in the pipeline when the setExplodeSentences param was set to true in the SentenceDetector annotator
- Fix the issue in the Tokenizer annotator when a custom pattern is used with lookahead/-behinds and it has 0-width matches. This led to indexes not being calculated correctly
- Fix missing embeddings in the output of the .fullAnnotate() method when the parseEmbeddings param was set to True/true
- Fix broken links to the Python API pages, as the generation of the PyDocs was slightly changed in a previous release. This makes the Python APIs accessible from the Annotators and Transformers pages as before
- Change the default values of the explodeEntities and mergeEntities parameters to true in the GraphExtraction annotator
- Better error handling when there are empty paths/relations in the GraphExtraction annotator. A new message will better guide the user on how to configure GraphExtraction to output meaningful relationships
- Remove the duplicated definition of the method setWeightedDistPath from ContextSpellCheckerApproach
Documentation
- TF Hub & HuggingFace to Spark NLP
- Models Hub with new models
- Spark NLP documentation
- Spark NLP Scala APIs
- Spark NLP Python APIs
- Spark NLP Workshop notebooks
- Spark NLP publications
- Spark NLP in Action
- Spark NLP training certification notebooks for Google Colab and Databricks
- Spark NLP Display for visualization of different types of annotations
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
Installation
Python
#PyPI
pip install spark-nlp==4.2.7
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.7
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.7
M1
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.7
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.7
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.7
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.7
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>4.2.7</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>4.2.7</version>
</dependency>
spark-nlp-m1:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-m1_2.12</artifactId>
<version>4.2.7</version>
</dependency>
spark-nlp-aarch64:
<!-- https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64 -->
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>4.2.7</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-4.2.7.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-4.2.7.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-m1-assembly-4.2.7.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-4.2.7.jar
What's Changed
@dcecchini @Cabir40 @agsfer @gadde5300 @bunyamin-polat @rpranab @jdobes-cz @josejuanmartinez @diatrambitas @maziyarpanahi
Full Changelog: 4.2.6...4.2.7
Spark NLP 4.2.6: Patch release
Improvements
- Update Spark & PySpark dependencies from 3.2.1 to 3.2.3 in the provided scripts and in all the documentation
Bug Fixes
- Fix the broken TypedDependencyParserApproach and TypedDependencyParserModel annotators used in Python (this bug was introduced in the 4.2.5 release)
- Fix the broken Python API documentation
Documentation
- TF Hub & HuggingFace to Spark NLP
- Models Hub with new models
- Spark NLP documentation
- Spark NLP Scala APIs
- Spark NLP Python APIs
- Spark NLP Workshop notebooks
- Spark NLP publications
- Spark NLP in Action
- Spark NLP training certification notebooks for Google Colab and Databricks
- Spark NLP Display for visualization of different types of annotations
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
Installation
Python
#PyPI
pip install spark-nlp==4.2.6
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.6
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.6
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.6
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.6
M1
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.6
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.6
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.6
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.6
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>4.2.6</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>4.2.6</version>
</dependency>
spark-nlp-m1:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-m1_2.12</artifactId>
<version>4.2.6</version>
</dependency>
spark-nlp-aarch64:
<!-- https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64 -->
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>4.2.6</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-4.2.6.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-4.2.6.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-m1-assembly-4.2.6.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-4.2.6.jar
What's Changed
Contributors
@gadde5300 @diatrambitas @Cabir40 @josejuanmartinez @danilojsl @jsl-builder @DevinTDHa @maziyarpanahi @dcecchini @agsfer
Full Changelog: 4.2.5...4.2.6
Spark NLP 4.2.5: New CamemBERT for sequence classification, better pipeline validation in LightPipeline, new Databricks 11.3 runtime, new EMR 6.8/6.9 versions with Spark 3.3, updated notebooks with latest TensorFlow 2.11, 400+ state-of-the-art models and many more!
Overview
Spark NLP 4.2.5 comes with a new CamemBERT for sequence classification annotator (multi-class & multi-label), new pipeline validation for LightPipeline in Python, 26 updated notebooks using the latest TensorFlow and Transformers libraries, support for the new Databricks 11.3 runtime, support for the new EMR 6.8 and 6.9 versions (the only EMR versions with Spark 3.3), over 400 state-of-the-art multi-lingual pretrained models, and bug fixes.
Do not forget to visit Models Hub and its 11700+ free and open-source models & pipelines. As always, we would like to thank our community for their feedback, questions, and feature requests.
New Features & Improvements
- NEW: Introducing the CamemBertForSequenceClassification annotator in Spark NLP. CamemBertForSequenceClassification can load CamemBERT models with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for multi-class document classification tasks. This annotator is compatible with all the models trained or fine-tuned with CamembertForSequenceClassification for PyTorch or TFCamembertForSequenceClassification for TensorFlow in Hugging Face (see the sketch after this list)
- NEW: Add AnnotatorType validation in Spark NLP LightPipeline. Previously, a misconfiguration of inputCols in an annotator in a pipeline raised an exception when using the transform method, but in LightPipeline it only produced empty values. This behavior can confuse users, so this change introduces a validation that now raises an exception in LightPipeline too:
  - Add outputAnnotatorType for all annotators in Python
  - Add inputAnnotatorTypes and outputAnnotatorType requirement validation for all subclasses derived from AnnotatorApproach and AnnotatorModel
  - Add AnnotatorType validation in LightPipeline
- NEW: Migrate 26 notebooks for importing external Transformer models into Spark NLP. These notebooks now come with the latest TensorFlow 2.11.0 and HuggingFace 4.25.1 releases. The notebooks also have TF signatures with data input types explicitly set to guarantee model sanity once imported into Spark NLP
- Add validation for the number and type of columns set in the TFNerDLGraphBuilder annotator, in an effort to avoid wrong column definitions when using Spark NLP annotators in Python
- Add more details to the Alphabet error message in the EntityRuler annotator to better guide users
- Add instructions on how to resolve RocksDB incompatibilities when using Spark NLP on an M1 machine
- Welcoming new Databricks runtimes to our Spark NLP family:
  - 11.3
  - 11.3 ML
  - 11.3 GPU
- Welcoming new EMR versions to our Spark NLP family:
  - 6.8.0
  - 6.9.0
- Refactor and implement better error handling in ResourceDownloader. This change removes getObjectFromS3, allowing the AWS SDK to raise the corresponding error. In addition, this change also refactors ResourceDownloader to reflect the intention of each credential type in the downloader
- Implement full build and test of all unit tests on the Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x major releases
- Upgrade sbt-assembly to 1.2.0, which comes with lots of performance improvements. This benefits those who are trying to package Spark NLP as a fat JAR
- Update sbt to 1.8.0 with improvements and bug fixes, but mostly CVE fixes:
  - Updates Coursier to 2.1.0-RC1 to address GHSA-wv7w-rj2x-556x
  - Updates Ivy to 2.3.0-sbt-a8f9eb5bf09d0539ea3658a2c2d4e09755b5133e to address GHSA-wv7w-rj2x-556x
  - Uses the new withIncludeScala in assemblyOption instead of value
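As referenced in the first bullet above, here is a minimal sketch of document classification with the new annotator; the default pretrained model is loaded and the column names are only illustrative.

```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, CamemBertForSequenceClassification
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

# Loads the default pretrained CamemBERT sequence-classification model
seq_classifier = CamemBertForSequenceClassification.pretrained() \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline(stages=[document_assembler, tokenizer, seq_classifier])
```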
Bug Fixes
- Fix an issue with the BigTextMatcher annotator, where it would not match entities with overlapping definitions. For example, if both lung and lung cancer were defined, lung would not be matched in a given text. This was due to an abstraction error in one of the subclasses of BigTextMatcher during construction of the underlying data structure
- Fix an indexing issue in the RegexTokenizer annotator. If the document was split into sentences, the index of the sentence inside the document was not taken into consideration for the indexes of the tokens. This would lead to further issues down the pipeline, where tokens would be filtered out while unpacking them for other annotators
- Refactor the Resolvers object in Spark NLP's dependency to avoid a conflict with the Resolvers inside the new sbt
Known Issues
The TypedDependencyParserModel annotator fails in Python in this release (will be fixed in the 4.2.6 release next week)
Models
Spark NLP 4.2.5 comes with 400+ state-of-the-art pre-trained transformer models in many languages.
Featured Models
Spark NLP covers the following languages:
English, Multilingual, Afrikaans, Afro-Asiatic languages, Albanian, Altaic languages, American Sign Language, Amharic, Arabic, Argentine Sign Language, Armenian, Artificial languages, Atlantic-Congo languages, Austro-Asiatic languages, Austronesian languages, Azerbaijani, Baltic languages, Bantu languages, Basque, Basque (family), Belarusian, Bemba (Zambia), Bengali, Bangla, Berber languages, Bihari, Bislama, Bosnian, Brazilian Sign Language, Breton, Bulgarian, Catalan, Caucasian languages, Cebuano, Celtic languages, Central Bikol, Chichewa, Chewa, Nyanja, Chilean Sign Language, Chinese, Chuukese, Colombian Sign Language, Congo Swahili, Croatian, Cushitic languages, Czech, Danish, Dholuo, Luo (Kenya and Tanzania), Dravidian languages, Dutch, East Slavic languages, Eastern Malayo-Polynesian languages, Efik, Esperanto, Estonian, Ewe, Fijian, Finnish, Finnish Sign Language, Finno-Ugrian languages, French, French-based creoles and pidgins, Ga, Galician, Ganda, Georgian, German, Germanic languages, Gilbertese, Greek (modern), Greek languages, Gujarati, Gun, Haitian, Haitian Creole, Hausa, Hebrew (modern), Hiligaynon, Hindi, Hiri Motu, Hungarian, Icelandic, Igbo, Iloko, Indic languages, Indo-European languages, Indo-Iranian languages, Indonesian, Irish, Isoko, Isthmus Zapotec, Italian, Italic languages, Japanese, Japanese, Kabyle, Kalaallisut, Greenlandic, Kannada, Kaonde, Kinyarwanda, Kirundi, Kongo, Korean, Kwangali, Kwanyama, Kuanyama, Latin, Latvian, Lingala, Lithuanian, Louisiana Creole, Lozi, Luba-Katanga, Luba-Lulua, Lunda, Lushai, Luvale, Macedonian, Malagasy, Malay, Malayalam, Malayo-Polynesian languages, Maltese, Manx, Marathi (Marāṭhī), Marshallese, Mexican Sign Language, Mon-Khmer languages, Morisyen, Mossi, Multiple languages, Ndonga, Nepali, Niger-Kordofanian languages, N...
Spark NLP 4.2.4: Introducing support for GCP storage for pre-trained models, update to TensorFlow 2.7.4 with CVEs fixes, improvements, and bug fixes
Overview
Spark NLP 4.2.4 comes with new support for GCP storage to automatically download and load models & pipelines via the cache_pretrained path, an update to TensorFlow 2.7.4 with security patch fixes, lots of improvements in our documentation, other enhancements, and bug fixes.
Do not forget to visit Models Hub and its 11400+ free and open-source models & pipelines. As always, we would like to thank our community for their feedback, questions, and feature requests.
New Features & Improvements
- Introducing support for GCP storage to automatically download and load pre-trained models/pipelines from the cache_pretrained directory (see the sketch after this list)
- Update to TensorFlow 2.7.4 with bug and CVE fixes. Details about the bug and CVE fixes: 417e2a1
- Improve error handling while importing external TensorFlow models into Spark NLP
- Improve error messages when importing external models from remote storages like DBFS, S3, and HDFS
- Update documentation on how to use the testDataset param in NerDLApproach, ClassifierDLApproach, MultiClassifierDLApproach, and SentimentDLApproach
- Update installation instructions for the Apple M1 chip
- Add support for future decoder-encoder models with two separate models
Bug Fixes
- Add the missing setPreservePosition param to NerConverter
- Add the missing inputAnnotatorTypes to the BigTextMatcher, ViveknSentimentModel, and NerConverter annotators
- Fix all the wrong example code provided for LemmatizerModel in Models Hub
- Fix the t5_grammar_error_corrector model to be compatible with Spark NLP 4.0+
- Fix the provided notebook for importing Longformer models from Hugging Face into Spark NLP
New Notebooks
Spark NLP | Notebooks | Colab |
---|---|---|
Spark NLP Conf | Download and Load Model from GCP Storage | |
LongformerEmbeddings | HuggingFace in Spark NLP - Longformer |
- You can visit Import Transformers in Spark NLP
- You can visit Spark NLP Workshop for 100+ examples
Documentation
- TF Hub & HuggingFace to Spark NLP
- Models Hub with new models
- Spark NLP documentation
- Spark NLP Scala APIs
- Spark NLP Python APIs
- Spark NLP Workshop notebooks
- Spark NLP publications
- Spark NLP in Action
- Spark NLP training certification notebooks for Google Colab and Databricks
- Spark NLP Display for visualization of different types of annotations
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
Installation
Python
#PyPI
pip install spark-nlp==4.2.4
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.4
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.4
M1
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.4
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.4
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>4.2.4</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>4.2.4</version>
</dependency>
spark-nlp-m1:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-m1_2.12</artifactId>
<version>4.2.4</version>
</dependency>
spark-nlp-aarch64:
<!-- https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64 -->
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>4.2.4</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-4.2.4.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-4.2.4.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-m1-assembly-4.2.4.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-4.2.4.jar
What's Changed
- release note v4.2.2 by @Cabir40 in #13091
- added languages by @ahmedlone127 in #13097
- [skip ci] Create PR 4.2.2-healthcare-docs-8fde8ce2327dce2fb89db1742eec8ca121eee0de-3 by @jsl-builder in #13084
- FEATURE NMH-139: Add annotator to existing model [skip-test] by @KshitizGIT in #13096
- Add Visual NLP 4.2 to compatible versions in models.json by @pabla in #13099
- Add new demos 25 by @agsfer in #13100
- Docs/alab 4.3.0 by @diatrambitas in #13104
- Added content for installation in OpenShift by @suvrat-joshi in #13105
- Update subtabs by @agsfer in #13110
- Release Notes Updated by @Cabir40 in #13111
- Updated old hc snippets by @ArshaanNazir in #13092
- Added content for healthcare nlp integration by @suvrat-joshi in #13115
- Added some content for troubleshooting section by @suvrat-joshi in #13116
- Docs/alab 2479 add content for model testing page by @rpranab in #13114
- Update oncology.md by @agsfer in #13146
- SPARKNLP-656 & SPARKNLP-657: Updated Documentation by @DevinTDHa in #13108
- SPARKNLP-658 Update EngineError message by @maziyarpanahi in #13109
- SPARKNLP-661: Add missing setPreservePosition in NerConverter by @DevinTDHa in #13112
- fixed Wrong Example code provided for LemmatizerModel #13125 by @ahmedlone127 in #13126
- SPARKNLP-620 Provide GCP Support for Cache Folder by @danilojsl in #13141
- SPARKNLP-669 Adding missing inputAnnotatorTypes by @danilojsl in #13144
- SPARKNLP-665 Updating to TensorFlow 2.7.4 by @maziyarpanahi in #13152
- SPARKNLP-671 incorporate the exception into the error message by @maziyarpanahi in #13153
- Models hub by @maziyarpanahi in #13160
- Release/424 release candidate by @maziyarpanahi in #13163
Full Changelog: 4.2.3...4.2.4
Spark NLP 4.2.1: Over 230 state-of-the-art Transformer Vision (ViT) pretrained pipelines, new multi-lingual support for Word Segmentation, add LightPipeline support to Automatic Speech Recognition pipelines, support for processed audio files in type Double for Wav2Vec2, and bug fixes
Overview
Spark NLP 4.2.1 comes with new multi-lingual support for Word Segmentation, used mostly (but not only) for Chinese, Japanese, Korean, and so on; adds Automatic Speech Recognition (ASR) pipelines to the LightPipeline arsenal for faster computation on smaller datasets without Apache Spark (e.g. in RESTful API use cases); adds support for processed audio files of type Double in addition to Float for Wav2Vec2; ships over 230 state-of-the-art Transformer Vision (ViT) pretrained pipelines for 1-line image classification; and includes bug fixes.
Do not forget to visit Models Hub and its 11400+ free and open-source models & pipelines. As always, we would like to thank our community for their feedback, questions, and feature requests.
New Features & Improvements
- NEW: Support for a multi-lingual WordSegmenter. Add the enableRegexTokenizer feature in WordSegmenter to support word segmentation within mixed and multi-lingual content #12854
- NEW: Add Audio/ASR (Wav2Vec2) support to LightPipeline (see the sketch after this list) #12895
- NEW: Add support for the Double type, in addition to Float, in the AudioAssembler annotator #12904
- Improve error handling in fullAnnotateImage for LightPipeline #12868
- Add the SpanBertCoref annotator to all docs #12889
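A hedged sketch of the LightPipeline ASR support referenced above, assuming `asr_pipeline` is a fitted PipelineModel built from AudioAssembler plus Wav2Vec2ForCTC and `audio_samples` is a Python list of float (or double) audio samples; the exact output keys depend on the pipeline's output column names.

```python
from sparknlp.base import LightPipeline

# Wrap a fitted ASR PipelineModel (assumed to exist as asr_pipeline)
light = LightPipeline(asr_pipeline)

# fullAnnotate accepts the raw audio content directly, without a Spark DataFrame
annotations = light.fullAnnotate(audio_samples)
print(annotations[0]["text"])
```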
Bug Fixes
- Fix feeding fullAnnotate in LightPipeline with a list, which started to fail in the 4.2.0 release
- Fix an exception in ContextSpellCheckerModel when updateVocabClass is used with append set to true #12875
- Fix an exception in the Chunker annotator #12901
New Notebooks
Spark NLP | Notebooks | Colab |
---|---|---|
SpanBertCorefModel | Coreference Resolution with SpanBertCorefModel | |
WordSegmenter | Train and inference multi-lingual Word Segmenter |
- You can visit Import Transformers in Spark NLP
- You can visit Spark NLP Workshop for 100+ examples
Models
Spark NLP 4.2.1 comes with 230+ state-of-the-art pre-trained Transformer Vision (ViT) pipelines:
Featured Pipelines
Pipeline | Name | Lang |
---|---|---|
PretrainedPipeline | pipeline_image_classifier_vit_base_patch16_224_finetuned_eurosat | en |
PretrainedPipeline | pipeline_image_classifier_vit_base_beans_demo_v5 | en |
PretrainedPipeline | pipeline_image_classifier_vit_animal_classifier_huggingface | en |
PretrainedPipeline | pipeline_image_classifier_vit_Infrastructures | en |
PretrainedPipeline | pipeline_image_classifier_vit_blocks | en |
PretrainedPipeline | pipeline_image_classifier_vit_beer_whisky_wine_detection | en |
PretrainedPipeline | pipeline_image_classifier_vit_base_xray_pneumonia | en |
PretrainedPipeline | pipeline_image_classifier_vit_baseball_stadium_foods | en |
PretrainedPipeline | pipeline_image_classifier_vit_dog_vs_chicken | en |
Check out the 460+ Transformer Vision (ViT) models & pipelines on Models Hub - Image Classification
Spark NLP covers the following languages:
English, Multilingual, Afrikaans, Afro-Asiatic languages, Albanian, Altaic languages, American Sign Language, Amharic, Arabic, Argentine Sign Language, Armenian, Artificial languages, Atlantic-Congo languages, Austro-Asiatic languages, Austronesian languages, Azerbaijani, Baltic languages, Bantu languages, Basque, Basque (family), Belarusian, Bemba (Zambia), Bengali, Bangla, Berber languages, Bihari, Bislama, Bosnian, Brazilian Sign Language, Breton, Bulgarian, Catalan, Caucasian languages, Cebuano, Celtic languages, Central Bikol, Chichewa, Chewa, Nyanja, Chilean Sign Language, Chinese, Chuukese, Colombian Sign Language, Congo Swahili, Croatian, Cushitic languages, Czech, Danish, Dholuo, Luo (Kenya and Tanzania), Dravidian languages, Dutch, East Slavic languages, Eastern Malayo-Polynesian languages, Efik, Esperanto, Estonian, Ewe, Fijian, Finnish, Finnish Sign Language, Finno-Ugrian languages, French, French-based creoles and pidgins, Ga, Galician, Ganda, Georgian, German, Germanic languages, Gilbertese, Greek (modern), Greek languages, Gujarati, Gun, Haitian, Haitian Creole, Hausa, Hebrew (modern), Hiligaynon, Hindi, Hiri Motu, Hungarian, Icelandic, Igbo, Iloko, Indic languages, Indo-European languages, Indo-Iranian languages, Indonesian, Irish, Isoko, Isthmus Zapotec, Italian, Italic languages, Japanese, Japanese, Kabyle, Kalaallisut, Greenlandic, Kannada, Kaonde, Kinyarwanda, Kirundi, Kongo, Korean, Kwangali, Kwanyama, Kuanyama, Latin, Latvian, Lingala, Lithuanian, Louisiana Creole, Lozi, Luba-Katanga, Luba-Lulua, Lunda, Lushai, Luvale, Macedonian, Malagasy, Malay, Malayalam, Malayo-Polynesian languages, Maltese, Manx, Marathi (Marāṭhī), Marshallese, Mexican Sign Language, Mon-Khmer languages, Morisyen, Mossi, Multiple languages, Ndonga, Nepali, Niger-Kordofanian languages, Nigerian Pidgin, Niuean, North Germanic languages, Northern Sotho, Pedi, Sepedi, Norwegian, Norwegian Bokmål, Norwegian Nynorsk, Nyaneka, Oromo, Pangasinan, Papiamento, Persian (Farsi), Peruvian Sign Language, Philippine languages, Pijin, Pohnpeian, Polish, Portuguese, Portuguese-based creoles and pidgins, Punjabi (Eastern), Romance languages, Romanian, Rundi, Russian, Ruund, Salishan languages, Samoan, San Salvador Kongo, Sango, Semitic languages, Serbo-Croatian, Seselwa Creole French, Shona, Sindhi, Sino-Tibetan languages, Slavic languages, Slovak, Slovene, Somali, South Caucasian languages, South Slavic languages, Southern Sotho, Spanish, Spanish Sign Language, Sranan Tongo, Swahili, Swati, Swedish, Tagalog, Tahitian, Tai, Tamil, Telugu, Tetela, Tetun Dili, Thai, Tigrinya, Tiv, Tok Pisin, Tonga (Tonga Islands), Tonga (Zambia), Tsonga, Tswana, Tumbuka, Turkic languages, Turkish, Tuvalu, Tzotzil, Ukrainian, Umbundu, Uralic languages, Urdu, Venda, Venezuelan Sign Language, Vietnamese, Wallisian, Walloon, Waray (Philippines), Welsh, West Germanic languages, West Slavic languages, Western Malayo-Polynesian languages, Wolaitta, Wolaytta, Wolof, Xhosa, Yapese, Yiddish, Yoruba, Yucatec Maya, Yucateco, Zande (individual language), Zulu
The complete list of all 11000+ models & pipelines in 230+ languages is available on Models Hub
Documentation
- TF Hub & HuggingFace to Spark NLP
- Models Hub with new models
- Spark NLP documentation
- Spark NLP Scala APIs
- Spark NLP Python APIs
- Spark NLP Workshop notebooks
- Spark NLP publications
- Spark NLP in Action
- Spark NLP training certification notebooks for Google Colab and Databri...
Spark NLP 4.2.3: Improved CoNLLGenerator annotator, new rules parameter in RegexMatcher, new IAnnotation feature for LightPipeline in Scala, and bug fixes
Overview
Spark NLP 4.2.3 comes with new improvements to the CoNLLGenerator annotator, a new way to pass rules to the RegexMatcher annotator, unified control over the number of columns in setInputCols between Scala and Python, new documentation for the IAnnotation feature for those using Spark NLP in Scala, and bug fixes.
Do not forget to visit Models Hub and its 11400+ free and open-source models & pipelines. As always, we would like to thank our community for their feedback, questions, and feature requests.
New Features & Improvements
- Add a metadata sentence key parameter to select which metadata field to use as the sentence for the CoNLLGenerator annotator
- Include escaping in the CoNLLGenerator annotator when writing to CSV, and preserve special-character tokens
- Add rules and delimiter parameters to the RegexMatcher annotator to support a string as input in addition to a file:
regexMatcher = RegexMatcher() \
.setRules(["\\d{4}\\/\\d\\d\\/\\d\\d,date", "\\d{2}\\/\\d\\d\\/\\d\\d,short_date"]) \
.setDelimiter(",") \
.setInputCols(["sentence"]) \
.setOutputCol("regex") \
.setStrategy("MATCH_ALL")
- Implement a new control over the number of accepted columns in Python. This syncs the behavior between Scala and Python when the user sets more columns than allowed inside setInputCols while using Spark NLP in Python
- Add documentation for the new IAnnotation feature for Scala users
Bug Fixes
- Fix a NotSerializableException when the WordEmbeddings annotator is used on a K8s cluster while setEnableInMemoryStorage is set to true
- Fix a bug in the RegexTokenizer annotator where it output wrong indexes if the pattern included splits that were not followed by a space
- Fix the training module failing on EMR due to bad Apache Spark version detection. The use of the following classes was fixed on EMR: CoNLL(), CoNLLU(), POS(), and PubTator()
- Fix a bug in the CoNLLGenerator annotator when a token has non-integer metadata
- Fix the wrong SentencePiece model name required for DeBertaForQuestionAnswering and DeBertaEmbeddings when importing models
- Fix NaN results in some ViTForImageClassification models/pipelines
New Notebooks
- You can visit Import Transformers in Spark NLP
- You can visit Spark NLP Workshop for 100+ examples
Documentation
- TF Hub & HuggingFace to Spark NLP
- Models Hub with new models
- Spark NLP documentation
- Spark NLP Scala APIs
- Spark NLP Python APIs
- Spark NLP Workshop notebooks
- Spark NLP publications
- Spark NLP in Action
- Spark NLP training certification notebooks for Google Colab and Databricks
- Spark NLP Display for visualization of different types of annotations
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
Installation
Python
#PyPI
pip install spark-nlp==4.2.3
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.3
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.3
M1
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.3
AArch64
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.3
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>4.2.3</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>4.2.3</version>
</dependency>
spark-nlp-m1:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-m1_2.12</artifactId>
<version>4.2.3</version>
</dependency>
spark-nlp-aarch64:
<!-- https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64 -->
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>4.2.3</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-4.2.3.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-4.2.3.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-m1-assembly-4.2.3.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-4.2.3.jar
What's Changed
- Models hub legal by @josejuanmartinez in #12999
- Models hub finance by @josejuanmartinez in #13000
- Embed React and ReactDOM instead of packages from unpkg [skip test] by @pabla in #13002
- updated OCR release notes by @albertoandreottiATgmail in #13010
- Compat tables by @albertoandreottiATgmail in #13012
- Updating s3 link for dependency_conllu model by @luca-martial in #13016
- Add new demos by @agsfer in #13020
- Add new demos 24 by @agsfer in #13022
- Updated legre_contract_doc_parties_en and finre_work_experience_en mo⦠by @bunyamin-polat in #13023
- Docs/alab update documentation 410 by @diatrambitas in #13024
- Doc fix scala and open source by @ArshaanNazir in #13008
- Update 2022-10-22-finclf_bert_sentiment_analysis_lt.md by @gadde5300 in #13026
- add alab image by @agsfer in #13030
- Docs/alab update documentation 410 by @diatrambitas in #13034
- SPARKNLP 643 detecting spark version in a safer way by @maziyarpanahi in #13035
- Docs/alab update documentation 410 by @diatrambitas in #13041
- Added content for exporting visual NER project ad updated few other sections by @suvrat-joshi in #13042
- Bump model card Spark NLP HC version to 4.2.1 by @luca-martial in #13027
- SPARKNLP-642: Fix indexing issue for regex splits without space by @DevinTDHa in #13032
- Update ALAB by @agsfer in #13045
- Serializable Issue K8s Word Embeddings by @danilojsl in #13001
- FEATURE NMH-133: Rename products in search [skip-test] by @KshitizGIT in #12998
- Fix sorting in the versions drop-down [skip test] by @pabla in #13049
- Add tooltips for Unidirectional and Bidirectional models [skip test] by @pabla in #13064
- FEATURE NMH-134: Rebranding products [skip-test] by @KshitizGIT in #13065
- Adding Control for Annotators with One Column by @danilojsl in #12997
- Update 2022-10-18-legre_confidentiality_en.md by @gadde5300 in #13059
- Update 2022-09-28-legre_indemnifications_en.md by @gadde5300 in #13058
- Fix a bug in Vision Transformer annotator that results in NaNs for some models by @ahmedlone127 in #13048
- Bug fix and enhancements for CoNLLGenerator annotator by @maziyarpanahi in #13053
- SPARKNLP-621: Add string support to RegexMatcher in addition to a file by @DevinTDHa in #13060
- Add ScalaDoc for IAnnotation by @danilojsl in #130...
Spark NLP 4.2.2: Support DBFS, HDFS, and S3 for importing external models, unifying LightPipeline APIs across supported languages for Image Classification, new fullAnnotateImage for Scala, new fullAnnotateImageJava for Java, support LightPipeline for QuestionAnswering pre-trained pipelines, and bug fixes
Overview
Spark NLP 4.2.2 comes with support for DBFS, HDFS, and S3 in addition to local file systems when importing external models from TF Hub and Hugging Face, unified LightPipeline APIs across the Scala, Java, and Python languages for image classification, the new fullAnnotateImage for Scala, the new fullAnnotateImageJava for Java, support for LightPipeline in question-answering pre-trained pipelines, and bug fixes.
Do not forget to visit Models Hub and its 11400+ free and open-source models & pipelines. As always, we would like to thank our community for their feedback, questions, and feature requests.
New Features & Improvements
- Add support for importing TensorFlow SavedModel from remote storages like DBFS, S3, and HDFS. From this release, you can import models saved from TF Hub and HuggingFace on remote storage
- Add support for fullAnnotate in LightPipeline for paths of images in Scala
- Add the fullAnnotate method to PretrainedPipeline for Scala
- Add the fullAnnotateJava method to PretrainedPipeline for Java
- Add fullAnnotateImage to PretrainedPipeline for Scala
- Add fullAnnotateImageJava to PretrainedPipeline for Java
- Add support for Question Answering in the fullAnnotate method of PretrainedPipeline
- Add Predicted Entities to all Vision Transformer (ViT) models and pipelines
Bug Fixes
- Unify the annotatorType name between Python and Scala for the Spark schema in Annotation, AnnotationImage, and AnnotationAudio
- Fix missing indexes in the RecursiveTokenizer annotator that affected downstream NLP tasks in the pipeline
New Notebooks
Spark NLP | Notebooks | Colab |
---|---|---|
WordSegmenter | Import External SavedModel From Remote |
- You can visit Import Transformers in Spark NLP
- You can visit Spark NLP Workshop for 100+ examples
Documentation
- TF Hub & HuggingFace to Spark NLP
- Models Hub with new models
- Spark NLP documentation
- Spark NLP Scala APIs
- Spark NLP Python APIs
- Spark NLP Workshop notebooks
- Spark NLP publications
- Spark NLP in Action
- Spark NLP training certification notebooks for Google Colab and Databricks
- Spark NLP Display for visualization of different types of annotations
- Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
Installation
Python
#PyPI
pip install spark-nlp==4.2.2
Spark Packages
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x (Scala 2.12):
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
GPU
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.2
M1
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.2
Maven
spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>4.2.2</version>
</dependency>
spark-nlp-gpu:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>4.2.2</version>
</dependency>
spark-nlp-m1:
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-m1_2.12</artifactId>
<version>4.2.2</version>
</dependency>
FAT JARs
- CPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-assembly-4.2.2.jar
- GPU on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-gpu-assembly-4.2.2.jar
- M1 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-m1-assembly-4.2.2.jar
- AArch64 on Apache Spark 3.0.x/3.1.x/3.2.x/3.3.x: https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/jars/spark-nlp-aarch64-assembly-4.2.2.jar
What's Changed
Contributors
@galiph @agsfer @pabla @josejuanmartinez @Cabir40 @maziyarpanahi @Meryem1425 @danilojsl @jsl-builder @jsl-models @ahmedlone127 @DevinTDHa @jdobes-cz @Damla-Gurbaz @Mary-Sci
New Contributors
Full Changelog: 4.2.1...4.2.2
Spark NLP 4.2.0: Wav2Vec2 for Automatic Speech Recognition (ASR), TAPAS for Table Question Answering, CamemBERT for Token Classification, new evaluation metrics for external datasets in all classifiers, much faster EntityRuler, over 3000+ state-of-the-art multi-lingual models & pipelines, and many more!
Overview
For the first time ever, we are delighted to announce Automatic Speech Recognition (ASR) support in Spark NLP using state-of-the-art Wav2Vec2 models at scale. This release also comes with Table Question Answering by TAPAS, CamemBERT for Token Classification, support for an external test dataset during training of all classifiers, a much faster EntityRuler, 3000+ state-of-the-art models, and other enhancements and bug fixes!
We are also celebrating crossing 11000+ free and open-source models & pipelines in our Models Hub. As always, we would like to thank our community for their feedback, questions, and feature requests.
New Features & Improvements
- NEW: Introducing the Wav2Vec2ForCTC annotator in Spark NLP. Wav2Vec2ForCTC can load Wav2Vec2 models for the Automatic Speech Recognition (ASR) task. Wav2Vec2 is a multi-modal model that combines speech and text; it is the first multi-modal model of its kind we welcome in Spark NLP. This annotator is compatible with all the models trained or fine-tuned with Wav2Vec2ForCTC for PyTorch or TFWav2Vec2ForCTC for TensorFlow in Hugging Face (#12767). See the sketch after this list.
  wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
- NEW: Introducing the TapasForQuestionAnswering annotator in Spark NLP. TapasForQuestionAnswering can load TAPAS models with a cell selection head and an optional aggregation head on top for question-answering tasks on tables (linear layers on top of the hidden-states output to compute logits and optional logits_aggregation), e.g. for SQA, WTQ, or WikiSQL-supervised tasks. TAPAS is a BERT-based model specifically designed (and pre-trained) for answering questions about tabular data. This annotator is compatible with all the models trained or fine-tuned with TapasForQuestionAnswering for PyTorch or TFTapasForQuestionAnswering for TensorFlow in Hugging Face
- NEW: Introducing the CamemBertForTokenClassification annotator in Spark NLP. CamemBertForTokenClassification can load CamemBERT models with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named Entity Recognition (NER) tasks. This annotator is compatible with all the models trained or fine-tuned with CamembertForTokenClassification for PyTorch or TFCamembertForTokenClassification for TensorFlow in Hugging Face (#12752)
- Implement setTestDataset to evaluate metrics on an external dataset during training of text classifiers in Spark NLP. This feature is similar to NerDLApproach, where metrics are calculated on each epoch, and has been added to the following multi-class/multi-label text classifier annotators: ClassifierDLApproach, SentimentDLApproach, and MultiClassifierDLApproach (#12796)
- Refactor and improve EntityRuler annotator inference to be up to 24x faster, especially when used with a long list of labels/entities. We sped up the inference process by implementing the Aho-Corasick algorithm to match patterns in a string. This requires the following changes when using EntityRuler #12634
- Add support for S3 storage in the cache_folder where models are downloaded, extracted, and loaded from. Previously, we only supported local file systems, HDFS, and DBFS. This new feature is especially useful for users on Kubernetes clusters with no access to HDFS or any other distributed file systems (#12707)
- Implement lookaround functionalities in the DocumentNormalizer annotator. DocumentNormalizer already has both lookahead and lookbehind functionalities; to extend support for more complex normalizations, especially within clinical text, we are introducing the lookaround feature (#12735)
- Implement the setReplaceEntities param in the NerOverwriter annotator to replace all the NER labels (entities) with the given new labels (entities) (#12745)
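As referenced in the first bullet above, here is a minimal sketch of an ASR pipeline with the new Wav2Vec2ForCTC annotator; the pretrained model name (`asr_wav2vec2_base_960h`) and the `audio_content` column are assumptions for illustration.

```python
from sparknlp.base import AudioAssembler
from sparknlp.annotator import Wav2Vec2ForCTC
from pyspark.ml import Pipeline

# The "audio_content" column is assumed to hold arrays of float audio samples
audio_assembler = AudioAssembler() \
    .setInputCol("audio_content") \
    .setOutputCol("audio_assembler")

# Assumed pretrained name; check Models Hub for available Wav2Vec2 models
speech_to_text = Wav2Vec2ForCTC \
    .pretrained("asr_wav2vec2_base_960h", "en") \
    .setInputCols(["audio_assembler"]) \
    .setOutputCol("text")

pipeline = Pipeline(stages=[audio_assembler, speech_to_text])
```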
Bug Fixes
- Fix a bug in generating the NerDL graph when using TF v2. The previous graph generated by the TFGraphBuilder annotator resulted in an exception when the length of the sequence was 1. This issue has been resolved, and new graphs created by TFGraphBuilder no longer have this issue (#12636)
- Fix a bug introduced in the 4.0.0 release in the Transformer-based word embeddings annotators. In the 4.0.0 release, the following annotators were migrated to BatchAnnotate to improve their performance, especially on GPU. However, a bug was introduced in the sentence indices which, when combined with SentenceEmbeddings for text classification tasks (ClassifierDLApproach, SentimentDLApproach, and MultiClassifierDLApproach), resulted in low accuracy: AlbertEmbeddings, CamemBertEmbeddings, DeBertaEmbeddings, DistilBertEmbeddings, LongformerEmbeddings, RoBertaEmbeddings, XlmRoBertaEmbeddings, and XlnetEmbeddings (#12641)
- Add support for a list of questions and contexts in LightPipeline. Previously, only one context and question at a time were supported in LightPipeline for Question Answering annotators. We have added support in fullAnnotate and annotate to receive two lists of questions and contexts (#12653)
- Fix a division-by-zero exception in the GPT2Transformer annotator when the setDoSample param was set to true (#12661)
- Fix an AttributeError when PretrainedPipeline is used in Python with ImageAssembler as one of the stages (#12813)
New Notebooks
Spark NLP | Notebooks | Colab |
---|---|---|
Wav2Vec2ForCTC | Automatic Speech Recognition in Spark NLP | |
ViTForImageClassification | HuggingFace in Spark NLP - ViTForImageClassification | |
CamemBertForTokenClassification | HuggingFace in Spark NLP - CamemBertForTokenClassification | |
ClassifierDLApproach | ClassifierDL Train and Evaluate | |
MultiClassifierDLApproach | MultiClassifierDL Train and Evaluate | |
SentimentDLApproach | SentimentDL Train and Evaluate | |
Pretrained/cache_folder | Download & Load Models From S3 | |
EntityRuler | [EntityRuler](https://github.com/JohnSnowLabs/spark-nlp-workshop/... |