Commit af685fd6 authored by: H Hui Zhang

mmseg with pybind11

Parent 5ddc79a8
build/
dist/
pymmseg.egg-info/
pyMMSeg-cpp, a high performance Chinese word segmentation utility.
include README DESCRIPTION bin/pymmseg
recursive-include mmseg *
pymmseg-cpp
* by pluskid & kronuz
* http://github.com/pluskid/pymmseg-cpp
# DESCRIPTION:
pymmseg-cpp is a Python interface to rmmseg-cpp. rmmseg-cpp is a high
performance Chinese word segmentation utility for Ruby. However, its
core is written in C++, independent of Ruby, so I decided to write a
Python interface for it in order to use it in my Python projects.
# FEATURES:
* Runs fast with small memory consumption.
* Supports user-customized dictionaries.
* UTF-8 and Unicode encodings are supported.
# SYNOPSIS:
## A simple script
pymmseg-cpp provides a simple script (bin/pymmseg) that reads text
from standard input and prints the segmented result to standard
output. Try `pymmseg -h` for help on the options.
## As a Python module
To use pymmseg-cpp in normal Python program, first import the module and
init by loading the dictionaries:
```python
import mmseg
mmseg.Dictionary.load_dictionaries()
```
If you want to load your own customized dictionaries, customize
`mmseg.Dictionary.dictionaries` before calling `load_dictionaries`.
Then create an `Algorithm` iterable object and iterate through it:
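For example, the override could look like this (a sketch only: the `(kind, path)` pair shape follows the `Dictionary` class shipped in `bin/pymmseg`, and the file paths are hypothetical):

```python
# Sketch: Dictionary.dictionaries is a sequence of (kind, path) pairs,
# where kind is 'chars' or 'words'. The paths below are hypothetical.
custom_dictionaries = (
    ('chars', '/path/to/my_chars.dic'),  # hypothetical path
    ('words', '/path/to/my_words.dic'),  # hypothetical path
)
# Then, before loading:
#     mmseg.Dictionary.dictionaries = custom_dictionaries
#     mmseg.Dictionary.load_dictionaries()
for kind, path in custom_dictionaries:
    assert kind in ('chars', 'words')
```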
```python
algor = mmseg.Algorithm(text)
for tok in algor:
    print('%s [%d..%d]' % (tok.text, tok.start, tok.end))
```
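The `start`/`end` values are offsets into the original text, so the tokens can be stitched back together. A sketch with stand-in tuples rather than real segmenter output:

```python
from collections import namedtuple

# Stand-in tokens (illustrative values, not real segmenter output):
# each carries its text plus start/end offsets into the original string.
Token = namedtuple('Token', 'text start end')

text = "ABCD"
tokens = [Token("AB", 0, 2), Token("CD", 2, 4)]

# the offsets index into the original text
assert all(text[t.start:t.end] == t.text for t in tokens)

# joining the token texts yields the segmented output
segmented = " ".join(t.text for t in tokens)
assert segmented == "AB CD"
```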
## Customize the dictionary
You can also load your own character dictionary or word dictionary in the
following way:
```python
import mmseg
mmseg.Dictionary.load_words('customize_words.dic')
mmseg.Dictionary.load_chars('customize_chars.dic')
```
### Format for chars.dic
* each line contains the frequency of the character, a space, and then the character itself
### Format for words.dic
* each line contains the length of the word, a space, and then the word
### WARNING
* The length of a word is the number of characters in the word, not the number of bytes
* The format of words.dic is different from that of chars.dic, see above
* Every dictionary file should end with a newline
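A sketch of the two line formats (the entries below are made up for illustration, not taken from the bundled dictionaries):

```python
# chars.dic lines: "<freq> <character>"
# words.dic lines: "<length-in-characters> <word>"
chars_lines = ["431 的", "122 是"]          # made-up frequencies
words_lines = ["2 中国", "4 中华人民"]       # made-up entries

def parse_line(line):
    # split on the first space, like the C++ loader does with strchr
    num, _, text = line.partition(' ')
    return int(num), text

freq, ch = parse_line(chars_lines[0])
assert (freq, ch) == (431, '的')

length, word = parse_line(words_lines[0])
# the length counts characters, not bytes (2 chars here, 6 UTF-8 bytes)
assert length == len(word) == 2
assert len(word.encode('utf8')) == 6
```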
# REQUIREMENTS:
* Python 3
* g++
* pybind11
# INSTALLATION:
pymmseg-cpp should be installed using pip (note that the package name
on PyPI is `pymmseg`, not `pymmseg-cpp`; see "Alternative Version" below):
```
pip install pymmseg
```
or setuptools:
```
easy_install pymmseg
```
You can also download the latest code from github and build it yourself:
```
python setup.py build
```
Then copy the pymmseg directory to your Python `site-packages`
directory. Now you can use pymmseg in your application.
# Alternative Version
There is a package called `pymmseg-cpp` in PyPI. That is a modified version by Shenpeng Zhang (zsp007@gmail.com) based on an earlier version of this project. The version numbers of the two packages are independent. The naming is a little confusing, and unfortunately neither of us has had enough time to get the changes merged properly. I'll list the known differences here so that you can choose which version to use:
* pymmseg uses Python native extension code (instead of the original interface based on ctypes) with the help of Kronuz, who claimed a ~400% performance boost.
* pymmseg-cpp has a refined built-in dictionary file (EDIT: Now also incorporated in pymmseg)
* pymmseg-cpp ships with some helper functions that might be convenient when using with xapian
# CONTRIBUTIONS:
Python native extension code contributed by German M. Bravo (Kronuz)
for a ~400% performance boost under Python.
# LICENSE:
(The MIT License)
Copyright (c) 2012
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#!/usr/bin/env python3
import sys
import pstats
import cProfile
from io import StringIO
import getopt
import os
from os.path import dirname, join
import mmseg
class Dictionary():
dictionaries = (
('chars',
os.path.join(os.path.dirname(__file__), '../mmseg/data', 'chars.dic')),
('words',
os.path.join(os.path.dirname(__file__), '../mmseg/data', 'words.dic')),
)
@staticmethod
def load_dictionaries():
for t, d in Dictionary.dictionaries:
if t == 'chars':
if not mmseg.load_chars(d):
raise IOError("Cannot open '%s'" % d)
elif t == 'words':
if not mmseg.load_words(d):
raise IOError("Cannot open '%s'" % d)
mmseg.dict_load_defaults = Dictionary.load_dictionaries
class Algorithm(object):
def __init__(self, text: str):
"""\
Create an Algorithm instance to segment text.
"""
self.text = text.encode('utf8')
# add a reference to prevent the string buffer from
# being GC-ed
self.algor = mmseg.Algorithm(text)
self.destroied = False
def __iter__(self):
"""\
Iterate through all tokens. Note the iteration has
side-effect: an Algorithm object can only be iterated
once.
"""
while True:
tk = self.next_token()
if tk is None:
                return  # PEP 479: raising StopIteration inside a generator becomes RuntimeError
yield tk
def next_token(self):
"""\
Get next token. When no token available, return None.
"""
if self.destroied:
return None
tk = self.algor.next_token()
if tk.length == 0:
                # no token available, the algorithm object
                # can be destroyed
self._destroy()
return None
else:
return tk
def _destroy(self):
if not self.destroied:
self.destroied = True
def __del__(self):
self._destroy()
def profile(fn):
def wrapper(*args, **kwargs):
profiler = cProfile.Profile()
stream = StringIO()
profiler.enable()
try:
res = fn(*args, **kwargs)
finally:
profiler.disable()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('time')
print("", file=stream)
print("=" * 100, file=stream)
print("Stats:", file=stream)
stats.print_stats()
print("=" * 100, file=stream)
print("Callers:", file=stream)
stats.print_callers()
print("=" * 100, file=stream)
print("Callees:", file=stream)
stats.print_callees()
print(stream.getvalue(), file=sys.stderr)
stream.close()
return res
return wrapper
def print_usage():
print("""
mmseg Segment Chinese text. Read from stdin and print to stdout.
Options:
-h
--help Print this message
-s
--separator Select the separator of the segmented text. Default is space.
""")
sys.exit(0)
separator = " "
optlst, args = getopt.getopt(sys.argv[1:], 'hs:')
for opt, val in optlst:
if opt == '-h':
print_usage()
elif opt == '-s':
separator = val
# load default dictionaries
mmseg.dict_load_defaults()
def process_tokens(stdin, separator):
ret = ''
first = True
algor = Algorithm(stdin)
try:
for tk in algor:
if not first:
ret += separator
ret += tk.text
first = False
except RuntimeError:
pass
return ret
sys.stdout.write(process_tokens(sys.stdin.read(), separator))
sys.stdout.write('\n')
This diff is collapsed.
This diff is collapsed.
#include <cassert>
#include <cctype>
#include <cstdio>
#include <iostream>
#include "algor.h"
#include "rules.h"
using namespace std;
namespace rmmseg {
Token Algorithm::next_token() {
do {
if (m_pos >= m_text_length) return Token(NULL, 0);
Token tk(NULL, 0);
int len = next_char();
if (len == 1)
tk = get_basic_latin_word();
else
tk = get_cjk_word(len);
if (tk.length > 0) return tk;
} while (true);
}
Token Algorithm::get_basic_latin_word() {
int len = 1;
int start, end;
// Skip pre-word whitespaces and punctuations
while (m_pos < m_text_length) {
if (len > 1) break;
if (isalnum(m_text[m_pos])) break;
m_pos++;
len = next_char();
}
start = m_pos;
while (m_pos < m_text_length) {
if (len > 1) break;
if (!isalnum(m_text[m_pos])) break;
m_pos++;
len = next_char();
}
end = m_pos;
// Skip post-word whitespaces and punctuations
while (m_pos < m_text_length) {
if (len > 1) break;
if (isalnum(m_text[m_pos])) break;
m_pos++;
len = next_char();
}
auto t = Token(m_text + start, end - start);
return t;
}
Token Algorithm::get_cjk_word(int len) {
vector<Chunk> chunks = create_chunks();
if (chunks.size() > 1) mm_filter(chunks);
if (chunks.size() > 1) lawl_filter(chunks);
if (chunks.size() > 1) svwl_filter(chunks);
if (chunks.size() > 1) lsdmfocw_filter(chunks);
if (chunks.size() < 1) return Token(NULL, 0);
Token token(m_text + m_pos, chunks[0].words[0]->nbytes);
m_pos += chunks[0].words[0]->nbytes;
return token;
}
vector<Chunk> Algorithm::create_chunks() {
vector<Chunk> chunks;
Chunk chunk;
Word *w1, *w2, *w3;
int orig_pos = m_pos;
typedef vector<Word *> vec_t;
typedef vec_t::iterator it_t;
vec_t words1 = find_match_words();
for (it_t i1 = words1.begin(); i1 != words1.end(); ++i1) {
w1 = *i1;
chunk.words[0] = w1;
m_pos += w1->nbytes;
if (m_pos < m_text_length) {
vec_t words2 = find_match_words();
for (it_t i2 = words2.begin(); i2 != words2.end(); ++i2) {
w2 = *i2;
chunk.words[1] = w2;
m_pos += w2->nbytes;
if (m_pos < m_text_length) {
vec_t words3 = find_match_words();
for (it_t i3 = words3.begin(); i3 != words3.end(); ++i3) {
w3 = *i3;
if (w3->length == -1) // tmp word
{
chunk.n = 2;
} else {
chunk.n = 3;
chunk.words[2] = w3;
}
chunks.push_back(chunk);
}
} else if (m_pos == m_text_length) {
chunk.n = 2;
chunks.push_back(chunk);
}
m_pos -= w2->nbytes;
}
} else if (m_pos == m_text_length) {
chunk.n = 1;
chunks.push_back(chunk);
}
m_pos -= w1->nbytes;
}
m_pos = orig_pos;
return chunks;
}
int Algorithm::next_char() {
    // ONLY for UTF-8; 4-byte sequences (code points outside the BMP)
    // are not recognized and fall back to single-byte handling
    int ret = 1;
unsigned char ch = m_text[m_pos];
if (ch >= 0xC0 && ch <= 0xDF) {
ret = min(2, m_text_length - m_pos);
}
if (ch >= 0xE0 && ch <= 0xEF) {
ret = min(3, m_text_length - m_pos);
}
return ret;
}
vector<Word *> Algorithm::find_match_words() {
for (int i = 0; i < match_cache_size; ++i)
if (m_match_cache[i].first == m_pos) {
return m_match_cache[i].second;
}
vector<Word *> words;
Word *word;
int orig_pos = m_pos;
int n = 0, len;
while (m_pos < m_text_length) {
if (n >= max_word_length()) break;
len = next_char();
if (len <= 1) break;
m_pos += len;
n++;
word = dict::get(m_text + orig_pos, m_pos - orig_pos);
if (word) words.push_back(word);
}
m_pos = orig_pos;
if (words.empty()) {
word = get_tmp_word();
word->nbytes = next_char();
word->length = -1;
strncpy(word->text, m_text + m_pos, word->nbytes);
word->text[word->nbytes] = '\0';
words.push_back(word);
}
m_match_cache[m_match_cache_i] = make_pair(m_pos, words);
m_match_cache_i++;
if (m_match_cache_i >= match_cache_size) m_match_cache_i = 0;
return words;
}
}
#ifndef _ALGORITHM_H_
#define _ALGORITHM_H_
#include <vector>
#include <string>
#include "chunk.h"
#include "dict.h"
#include "token.h"
/**
* The Algorithm of MMSeg use four rules:
* - Maximum matching rule
* - Largest average word length rule
* - Smallest variance of word length rule
* - Largest sum of degree of morphemic freedom of one-character
* words rule
*/
namespace rmmseg {
class Algorithm {
public:
    // Algorithm(const char *text, int length)
    Algorithm(const std::string &text)
        : m_text_(text),
          m_pos(0),
          m_text_length(static_cast<int>(text.size())),
          m_tmp_words_i(0),
          m_match_cache_i(0) {
        // m_text must point at our own copy (m_text_); pointing it at the
        // argument's c_str() would dangle once the argument is destroyed
        m_text = m_text_.c_str();
        for (int i = 0; i < match_cache_size; ++i) m_match_cache[i].first = -1;
    }
Token next_token();
const char *get_text() const { return m_text; }
private:
Token get_basic_latin_word();
Token get_cjk_word(int);
std::vector<Chunk> create_chunks();
int next_word();
int next_char();
std::vector<Word *> find_match_words();
int max_word_length() { return 8; }
const char *m_text;
// https://github.com/pybind/pybind11/issues/2245
    // NOTE: m_text will be cleaned by gc, so hold the string by m_text_
const std::string m_text_;
int m_pos;
int m_text_length;
/* tmp words are only for 1-char words which
* are not exist in the dictionary. It's length
* value will be set to -1 to indicate it is
* a tmp word. */
Word *get_tmp_word() {
if (m_tmp_words_i >= max_tmp_words) m_tmp_words_i = 0; // round wrap
return &m_tmp_words[m_tmp_words_i++];
}
/* related to max_word_length and match_cache_size */
static const int max_tmp_words = 512;
Word m_tmp_words[max_tmp_words];
int m_tmp_words_i;
/* match word caches */
static const int match_cache_size = 3;
typedef std::pair<int, std::vector<Word *>> match_cache_t;
match_cache_t m_match_cache[match_cache_size];
int m_match_cache_i;
};
}
#endif /* _ALGORITHM_H_ */
#ifndef _CHUNK_H_
#define _CHUNK_H_
#include <cmath>
#include "word.h"
namespace rmmseg {
/**
* A chunk stores 3 (or less) successive words.
*/
struct Chunk {
int total_length() const {
int len = 0;
for (int i = 0; i < n; ++i) len += std::abs(words[i]->length);
// if (words[i]->length == -1) /* tmp word */
// len += 1;
// else
// len += words[i]->length;
return len;
}
double average_length() const { return ((double)total_length()) / n; }
double variance() const {
double avg = average_length();
double sqr_sum = 0;
double tmp;
for (int i = 0; i < n; ++i) {
tmp = std::abs(words[i]->length);
// if (tmp == -1)
// tmp = 1;
tmp = tmp - avg;
sqr_sum += tmp * tmp;
}
return std::sqrt(sqr_sum);
}
int degree_of_morphemic_freedom() const {
int sum = 0;
for (int i = 0; i < n; ++i) sum += words[i]->freq;
return sum;
}
int n;
Word *words[3];
};
}
#endif /* _CHUNK_H_ */
#include <cstdio>
#include "dict.h"
using namespace std;
namespace rmmseg {
struct Entry {
Word *word;
Entry *next;
};
const size_t init_size = 262147;
const size_t max_density = 5;
/*
Table of prime numbers 2^n+a, 19<=n<=30.
*/
static size_t primes[] = {
524288 + 21,
1048576 + 7,
2097152 + 17,
4194304 + 15,
8388608 + 9,
16777216 + 43,
33554432 + 35,
67108864 + 15,
134217728 + 29,
268435456 + 3,
536870912 + 11,
1073741824 + 85,
};
static size_t n_bins = init_size;
static size_t n_entries = 0;
static Entry **bins =
static_cast<Entry **>(std::calloc(init_size, sizeof(Entry *)));
static size_t new_size() {
for (size_t i = 0; i < sizeof(primes) / sizeof(primes[0]); ++i) {
if (primes[i] > n_bins) {
return primes[i];
}
}
// TODO: raise exception here
return n_bins;
}
static unsigned int hash(const char *str, int len) {
unsigned int key = 0;
while (len--) {
key += *str++;
key += (key << 10);
key ^= (key >> 6);
}
key += (key << 3);
key ^= (key >> 11);
key += (key << 15);
return key;
}
static void rehash() {
size_t new_n_bins = new_size();
Entry **new_bins =
static_cast<Entry **>(calloc(new_n_bins, sizeof(Entry *)));
Entry *entry, *next;
unsigned int hash_val;
for (size_t i = 0; i < n_bins; ++i) {
entry = bins[i];
while (entry) {
next = entry->next;
hash_val =
hash(entry->word->text, entry->word->nbytes) % new_n_bins;
entry->next = new_bins[hash_val];
new_bins[hash_val] = entry;
entry = next;
}
}
free(bins);
n_bins = new_n_bins;
bins = new_bins;
}
namespace dict {
/**
* str: the base of the string
* len: length of the string (in bytes)
*
* str may be a substring of a big chunk of text thus not nul-terminated,
* so len is necessary here.
*/
Word *get(const char *str, int len) {
unsigned int h = hash(str, len) % n_bins;
Entry *entry = bins[h];
if (!entry) return NULL;
do {
if (len == entry->word->nbytes &&
strncmp(str, entry->word->text, len) == 0)
return entry->word;
entry = entry->next;
} while (entry);
return NULL;
}
void add(Word *word) {
unsigned int hash_val = hash(word->text, word->nbytes);
unsigned int h = hash_val % n_bins;
Entry *entry = bins[h];
if (!entry) {
if (n_entries / n_bins > max_density) {
rehash();
h = hash_val % n_bins;
}
entry = static_cast<Entry *>(pool_alloc(sizeof(Entry)));
entry->word = word;
entry->next = NULL;
bins[h] = entry;
n_entries++;
return;
}
bool done = false;
do {
if (word->nbytes == entry->word->nbytes &&
strncmp(word->text, entry->word->text, word->nbytes) == 0) {
/* Overwriting. WARNING: the original Word object is
* permanently lost. This IS a memory leak, because
* the memory is allocated by pool_alloc. Instead of
* fixing this, tuning the dictionary file is a better
* idea
*/
entry->word = word;
done = true;
break;
}
entry = entry->next;
} while (entry);
if (!done) {
entry = static_cast<Entry *>(pool_alloc(sizeof(Entry)));
entry->word = word;
entry->next = bins[h];
bins[h] = entry;
n_entries++;
}
}
bool load_chars(const char *filename) {
FILE *fp = fopen(filename, "r");
if (!fp) {
return false;
}
const size_t buf_len = 24;
char buf[buf_len];
char *ptr;
        while (fgets(buf, buf_len, fp)) {
            // strip the trailing newline if present (the final line of
            // the file may not end with one)
            size_t n = strlen(buf);
            if (n > 0 && buf[n - 1] == '\n') buf[n - 1] = '\0';
ptr = strchr(buf, ' ');
if (!ptr) continue; // illegal input
*ptr = '\0';
add(make_word(ptr + 1, 1, atoi(buf)));
}
fclose(fp);
return true;
}
bool load_words(const char *filename) {
FILE *fp = fopen(filename, "r");
if (!fp) {
return false;
}
const int buf_len = 48;
char buf[buf_len];
char *ptr;
        while (fgets(buf, buf_len, fp)) {
            // strip the trailing newline if present (the final line of
            // the file may not end with one)
            size_t n = strlen(buf);
            if (n > 0 && buf[n - 1] == '\n') buf[n - 1] = '\0';
ptr = strchr(buf, ' ');
if (!ptr) continue; // illegal input
*ptr = '\0';
add(make_word(ptr + 1, atoi(buf), 0));
}
fclose(fp);
return true;
}
}
}
#ifndef _DICT_H_
#define _DICT_H_
#include "word.h"
/**
* A dictionary is a hash table of
* - key: string
* - value: word
*
* Dictionary data can be loaded from files. Two type of dictionary
* files are supported:
* - character file: Each line contains a number and a character,
* the number is the frequency of the character.
* The frequency should NOT exceeds 65535.
* - word file: Each line contains a number and a word, the
* number is the character count of the word.
*/
namespace rmmseg {
/* Instead of making a class with only one instance, i'll not
* bother to make it a class here. */
namespace dict {
void add(Word *word);
bool load_chars(const char *filename);
bool load_words(const char *filename);
Word *get(const char *str, int len);
}
}
#endif /* _DICT_H_ */
#include "memory.h"
#define PRE_ALLOC_SIZE 2097152 /* 2MB */
namespace rmmseg {
char *_pool_base = static_cast<char *>(std::malloc(PRE_ALLOC_SIZE));
size_t _pool_size = PRE_ALLOC_SIZE;
}
#ifndef _MEMORY_H_
#define _MEMORY_H_
#include <cstdlib>
/**
* Pre-allocate a chunk of memory and allocate them in small pieces.
* Those memory are never freed after allocation. Used for persist
* data like dictionary contents that will never be destroyed unless
* the application exited.
*/
namespace rmmseg {
const size_t REALLOC_SIZE = 2048; /* 2KB */
extern size_t _pool_size;
extern char *_pool_base;
inline void *pool_alloc(size_t len) {
void *mem = _pool_base;
if (len <= _pool_size) {
_pool_size -= len;
_pool_base += len;
return mem;
}
/* NOTE: the remaining memory is simply discard, which WILL
* cause memory leak. However, this function is not for allocating
* large object. Larger pre-alloc chunk size will also reduce the
* impact of this leak. So this is generally not a problem. */
_pool_base = static_cast<char *>(std::malloc(REALLOC_SIZE));
mem = _pool_base;
_pool_base += len;
_pool_size = REALLOC_SIZE - len;
return mem;
}
}
#endif /* _MEMORY_H_ */
#include <pybind11/pybind11.h>
#include <iostream>
#include <string>
#include "algor.h"
#include "dict.h"
#include "token.h"
#include "utils.h"
namespace py = pybind11;
#define STRINGIFY(x) #x
#define MACRO_STRINGIFY(x) STRINGIFY(x)
struct Token {
const char *text;
int offset;
int length;
};
PYBIND11_MODULE(mmseg, m) {
// String literal: https://en.cppreference.com/w/cpp/language/string_literal
m.doc() = R"pbdoc(
MMSeg pybind
)pbdoc";
    m.def("load_chars", [](const char *path) {
        return rmmseg::dict::load_chars(path);
    });
    m.def("load_words", [](const char *path) {
        return rmmseg::dict::load_words(path);
    });
    m.def("add", [](const char *word, int len, int freq) {
        /*
         * Add a word to the in-memory dictionary.
         *
         * - word is a string.
         * - len is the number of characters (not the number of bytes)
         *   of the word to be added.
         * - freq is the frequency of the word. This is only used when
         *   it is a one-character word.
         */
        rmmseg::Word *w =
            rmmseg::make_word(word, len, freq, static_cast<int>(strlen(word)));
        rmmseg::dict::add(w);
    });
    m.def("has_word", [](const char *word) {
        return rmmseg::dict::get(word, static_cast<int>(strlen(word))) !=
               nullptr;
    });
py::class_<rmmseg::Token>(m, "Token")
.def(py::init([](std::string str) {
return rmmseg::Token(str.c_str(), str.size());
}))
.def_property_readonly("text",
[](rmmseg::Token &self) {
return std::string(self.text, self.length);
})
.def_readonly("length", &rmmseg::Token::length)
.def("__repr__",
[](rmmseg::Token &self) {
return "<Token " + std::string(self.text, self.length) + " " +
std::to_string(self.length) + ">";
})
.def("__str__", [](rmmseg::Token &self) {
return std::string(self.text, self.length);
});
py::class_<rmmseg::Algorithm>(m, "Algorithm")
//.def(py::init<const char *, int>(), py::keep_alive<1, 2>())
.def(py::init([](std::string str) { return rmmseg::Algorithm(str); }))
.def("get_text",
[](rmmseg::Algorithm &self) { return self.get_text(); },
py::return_value_policy::reference)
.def("next_token",
[](rmmseg::Algorithm &self) { return self.next_token(); });
#ifdef VERSION_INFO
m.attr("__version__") = MACRO_STRINGIFY(VERSION_INFO);
#else
m.attr("__version__") = "dev";
#endif
}
#ifndef _RULES_H_
#define _RULES_H_
#include <algorithm>
#include <vector>
#include "chunk.h"
namespace rmmseg {
template <typename Cmp>
void take_highest(std::vector<Chunk> &chunks, const Cmp &cmp) {
unsigned int i = 1, j;
for (j = 1; j < chunks.size(); ++j) {
int rlt = cmp(chunks[j], chunks[0]);
if (rlt > 0) i = 0;
if (rlt >= 0) std::swap(chunks[i++], chunks[j]);
}
chunks.erase(chunks.begin() + i, chunks.end());
}
struct MMCmp_t {
int operator()(const Chunk &a, const Chunk &b) const {
return a.total_length() - b.total_length();
}
} MMCmp;
void mm_filter(std::vector<Chunk> &chunks) { take_highest(chunks, MMCmp); }
struct LAWLCmp_t {
int operator()(const Chunk &a, const Chunk &b) const {
double rlt = a.average_length() - b.average_length();
if (rlt == 0) return 0;
if (rlt > 0) return 1;
return -1;
}
} LAWLCmp;
void lawl_filter(std::vector<Chunk> &chunks) { take_highest(chunks, LAWLCmp); }
struct SVWLCmp_t {
int operator()(const Chunk &a, const Chunk &b) const {
double rlt = a.variance() - b.variance();
if (rlt == 0) return 0;
if (rlt < 0) return 1;
return -1;
}
} SVWLCmp;
void svwl_filter(std::vector<Chunk> &chunks) { take_highest(chunks, SVWLCmp); }
struct LSDMFOCWCmp_t {
int operator()(const Chunk &a, const Chunk &b) const {
return a.degree_of_morphemic_freedom() -
b.degree_of_morphemic_freedom();
}
} LSDMFOCWCmp;
void lsdmfocw_filter(std::vector<Chunk> &chunks) {
take_highest(chunks, LSDMFOCWCmp);
}
}
#endif /* _RULES_H_ */
#ifndef _TOKEN_H_
#define _TOKEN_H_
namespace rmmseg {
struct Token {
Token(const char *txt, int len) : text(txt), length(len) {}
// `text' may or may not be nul-terminated, its length
// should be stored in the `length' field.
//
// if length is 0, this is an empty token
const char *text;
int length;
};
}
#endif /* _TOKEN_H_ */
#include <Python.h>
#include <string.h>
char *PyMem_Strndup(const char *str, size_t len) {
    if (str != NULL) {
        char *copy = PyMem_New(char, len + 1);
        if (copy != NULL) {  // guard against allocation failure
            memcpy(copy, str, len);
            copy[len] = '\0';
        }
        return copy;
    }
    return NULL;
}
char *PyMem_Strdup(const char *str) { return PyMem_Strndup(str, strlen(str)); }
char *reprn(char *str, size_t len) {
static char strings[10240];
static size_t current = 0;
size_t reqlen = 2;
    unsigned char c; /* unsigned: bytes >= 0x80 must not look like control chars */
    char *out, *write, *begin = str, *end = str + len;
while (begin < end) {
c = *begin;
if (c == '\'') {
reqlen += 2;
} else if (c == '\r') {
reqlen += 2;
} else if (c == '\n') {
reqlen += 2;
} else if (c == '\t') {
reqlen += 2;
        } else if (c < ' ') {
            reqlen += 4; /* "\xNN" takes four characters */
} else {
reqlen++;
}
begin++;
}
if (reqlen > 10240) {
reqlen = 10240;
}
if (current + reqlen > 10240) {
current = 0;
}
begin = str;
end = str + len;
out = write = strings + current;
*write++ = '\'';
while (begin < end) {
c = *begin;
if (c == '\'') {
if (write + 5 >= strings + 10240) break;
sprintf(write, "\\'");
write += 2;
} else if (c == '\r') {
if (write + 5 >= strings + 10240) break;
sprintf(write, "\\r");
write += 2;
} else if (c == '\n') {
if (write + 5 >= strings + 10240) break;
sprintf(write, "\\n");
write += 2;
} else if (c == '\t') {
if (write + 5 >= strings + 10240) break;
sprintf(write, "\\t");
write += 2;
        } else if (c < ' ') {
            if (write + 7 >= strings + 10240) break;
            sprintf(write, "\\x%02x", c);
            write += 4;
} else {
if (write + 4 >= strings + 10240) break;
*write++ = c;
}
begin++;
}
*write++ = '\'';
*write++ = '\0';
current += (size_t)(write - out);
return out;
}
char *repr(char *str) { return reprn(str, strlen(str)); }
#ifndef _WORD_H_
#define _WORD_H_
#include <climits>
#include <cstring>
#include "memory.h"
namespace rmmseg {
const int word_embed_len = 4; /* at least 1 char (3 bytes+'\0') */
struct Word {
unsigned char nbytes; /* number of bytes */
char length; /* number of characters */
unsigned short freq;
char text[word_embed_len];
};
/**
* text: the text of the word.
* length: number of characters (not bytes).
* freq: the frequency of the word.
*/
inline Word *make_word(const char *text,
int length = 1,
int freq = 0,
int nbytes = -1) {
if (freq > USHRT_MAX) freq = USHRT_MAX; /* avoid overflow */
if (nbytes == -1) nbytes = static_cast<int>(std::strlen(text));
Word *w = static_cast<Word *>(
pool_alloc(sizeof(Word) + nbytes + 1 - word_embed_len));
w->nbytes = nbytes;
w->length = length;
w->freq = freq;
std::strncpy(w->text, text, nbytes);
w->text[nbytes] = '\0';
return w;
}
}
#endif /* _WORD_H_ */
# [options]
# python_requires = >=3.7
# setup_requires =
# setuptools
# wheel
# pybind11
# packages = find:
# [build-system]
# requires = ["setuptools>=42", "wheel", "pybind11~=2.6.1"]
# build-backend = "setuptools.build_meta"
#!/usr/bin/env python3
from setuptools import setup
# Available at setup time due to pyproject.toml
from pybind11.setup_helpers import Pybind11Extension, build_ext
VERSION_INFO = (1, 2, 0)
DATE_INFO = (2013, 2, 10) # YEAR, MONTH, DAY
VERSION = '.'.join(str(i) for i in VERSION_INFO)
REVISION = '%04d%02d%02d' % DATE_INFO
BUILD_INFO = "MMSeg v" + VERSION + " (" + REVISION + ")"
AUTHOR = "pluskid & kronuz & zsp007"
AUTHOR_EMAIL = 'pluskid@gmail.com'
URL = 'http://github.com/pluskid/pymmseg-cpp'
DOWNLOAD_URL = 'https://github.com/pluskid/pymmseg-cpp/archive/master.tar.gz'
LICENSE = "MIT"
PROJECT = "pymmseg"
def read(fname):
import os
try:
return open(os.path.join(os.path.dirname(__file__),
fname)).read().strip()
except IOError:
return ''
extra = {}
# The main interface is through Pybind11Extension.
# * You can add cxx_std=11/14/17, and then build_ext can be removed.
# * You can set include_pybind11=false to add the include directory yourself,
# say from a submodule.
#
# Note:
# Sort input source files if you glob sources to ensure bit-for-bit
# reproducible builds (https://github.com/pybind/python_example/pull/53)
ext_modules = [
Pybind11Extension(
"mmseg",
[
'mmseg/mmseg-cpp/mmseg.cpp', 'mmseg/mmseg-cpp/algor.cpp',
'mmseg/mmseg-cpp/dict.cpp', 'mmseg/mmseg-cpp/memory.cpp'
],
include_dirs=['mmseg/mmseg-cpp'],
        # Example: passing in the version to the compiled code
        define_macros=[('VERSION_INFO', VERSION)],
),
]
setup(
name=PROJECT,
version=VERSION,
description=read('DESCRIPTION'),
long_description=read('README'),
author=AUTHOR,
author_email=AUTHOR_EMAIL,
url=URL,
download_url=DOWNLOAD_URL,
license=LICENSE,
keywords='mmseg chinese word segmentation tokenization',
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent", "Programming Language :: Python",
"Programming Language :: Python :: 3", "Topic :: Text Processing",
"Topic :: Software Development :: Libraries :: Python Modules"
],
setup_requires=["pybind11"],
install_requires=["pybind11"],
#packages=['mmseg'],
ext_modules=ext_modules,
extras_require={"test": "pytest"},
# Currently, build_ext only provides an optional "highest supported C++
# level" feature, but in the future it may provide more features.
cmdclass={"build_ext": build_ext},
package_data={'mmseg': ['data/*.dic']},
scripts=['bin/pymmseg'],
**extra)
import mmseg
import os
print(mmseg.load_chars('mmseg/data/chars.dic'))
print(mmseg.load_words('mmseg/data/words.dic'))
print(mmseg.has_word('我'))
print(dir(mmseg.Token))
print(dir(mmseg.Algorithm))
string = "我是中国人武汉长江大桥"
t = mmseg.Token(string)
print(t)
print(t.text)
print("="*20)
a = mmseg.Algorithm(string)
print(a.get_text())
print(a.get_text().encode('utf8'))
print("="*20)
print(string)
while True:
tk = a.next_token()
if tk.length == 0:
break
#print(a.get_text())
#print(tk.length)
print(tk.text)
echo "hello world" | ./bin/pymmseg
echo "我是中国人武汉长江大桥" | ./bin/pymmseg
echo "我是,中国人。武汉长江大桥" | ./bin/pymmseg
echo "我是中国人。hello.武汉长江大桥" | ./bin/pymmseg
echo "我是中国人,hello.武汉长江大桥。" | ./bin/pymmseg
echo "我是中国人,hello.武汉长江大桥。" | ./bin/pymmseg