practNLPTools-lite

This project is a fork of biplab-iitb's practNLPTools.

Warning

The CLI is only for example purposes; do not use it for long-running jobs.

The very old code is available in the devbranch, and the prior stable release is tagged oldVersion.


Build Status - this badge might take you to practNLPTools, which is the testing ground for this repository, so don't worry.

Practical Natural Language Processing Tools for Humans. practNLPTools is a pythonic library over SENNA and Stanford Dependency Extractor.

Badges: PyPI release status, Travis CI build status, documentation status, dependency updates (pyup bot, Python 3) and FOSSA license status.

Note

From version 0.3.0 onwards, pntl is able to store results in a database for later use, if needed, by installing the dependency below.

pip install git+https://github.com/jawahar273/snowbase.git

QuickStart

Downloading the Stanford Parser JAR

To download the stanford-parser from GitHub automatically and place it inside the install directory:

pntl -I true
# downloads the required files from github.

Running Predefined Example Sentences

To run the predefined examples in batch mode (which processes more than one sentence, given as a list):

pntl -SE home/user/senna -B true

Example

Batch mode means the sentences are passed as a list.


# Example structure for predefined
# sentences in the code.

sentences = [
    "This is line 1",
    "This is line 2",
]

To run the predefined examples in non-batch mode:

pntl -SE home/user/senna

Running a user-given sentence

To run a user-given sentence, use the -S option:

pntl -SE home/user/senna -S 'I am gonna make him an offer he can not refuse.'
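
The same thing can also be done from Python rather than the CLI. The snippet below is only a minimal sketch: the Annotator class in pntl.tools, its senna_dir argument and the getAnnotations method follow the upstream practNLPTools interface and are assumptions here, so check them against your installed version.

# Minimal sketch only: class location, constructor argument and the
# getAnnotations method are assumptions based on the upstream
# practNLPTools interface and may differ in pntl.
from pntl.tools import Annotator

annotator = Annotator(senna_dir="/home/user/senna")  # path is illustrative

sentence = "I am gonna make him an offer he can not refuse."
annotations = annotator.getAnnotations(sentence, dep_parse=True)
print(annotations)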

Functionality

  • Semantic Role Labeling.

  • Syntactic Parsing.

  • Part of Speech Tagging (POS Tagging).

  • Named Entity Recognition (NER).

  • Dependency Parsing.

  • Shallow Chunking.

  • Skip-gram (if needed).

  • Finding the SENNA path if it is installed on the system.

  • Placing the Stanford parser and depParser files into the install directory.
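
All of the analyses listed above are usually produced by a single annotation call and returned together. As a rough sketch (the key names mirror the upstream practNLPTools result dictionary and are assumptions for pntl):

# Sketch only: key names follow the upstream practNLPTools result
# dictionary and are assumptions for pntl.
from pntl.tools import Annotator

annotator = Annotator(senna_dir="/home/user/senna")  # path is illustrative
annotations = annotator.getAnnotations("Robert went to the market.", dep_parse=True)

print(annotations["pos"])          # part-of-speech tags
print(annotations["ner"])          # named entities
print(annotations["chunk"])        # shallow chunks
print(annotations["srl"])          # semantic role labels
print(annotations["syntax_tree"])  # syntactic parse from SENNA
print(annotations["dep_parse"])    # Stanford dependency relations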

Future work

  • tag2file (new).

  • Creating depParser for the corresponding OS environment.

  • Custom input format for the Stanford parser instead of tree format.

Features

  1. Fast: SENNA is written in C, so it is fast.

  2. We use only the dependency extractor component of the Stanford Parser, which takes the syntactic parse from SENNA and applies dependency extraction. So there is no need to load the Stanford Parser's parsing models, which takes time.

  3. Easy to use.

  4. Platforms supported: Windows, Linux and Mac.

  5. Automatically finds the Stanford parser JAR if it is present in the install path [pntl].

Note

The SENNA pipeline has a fixed maximum sentence size that it can read; by default it is 1024 tokens per sentence. If you have larger sentences, consider changing the MAX_SENTENCE_SIZE value in SENNA_main.c and rebuilding your system-specific binary. Otherwise this could introduce misalignment errors.

Installation

Requires:

A computer with at least 500 MB of memory, a Java Runtime Environment (1.7 preferably; it works with 1.6 too, but this has not been tested) and Python installed.

Linux:

run:

sudo python setup.py install

Windows:

Run this command as administrator:

python setup.py install

Benchmark comparison

Benchmarked with the time command on Ubuntu, running testsrl.py from this link alongside tools.py from pntl.

                pntl           NLTK-senna

at first run
real            0m1.674s       0m2.484s
user            0m1.564s       0m1.868s
sys             0m0.228s       0m0.524s

at second run
real            0m1.245s       0m3.359s
user            0m1.560s       0m2.016s
sys             0m0.152s       0m1.168s

Note

This benchmark may differ from system to system. The results shown here are from an Ubuntu machine with 4 GB RAM and an i3 processor.

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.