I know little about NLP and have read nothing about word segmentation algorithms; however, given my knowledge of hidden Markov models and higher-order Markov models, I thought word segmentation is: given
- a sequence of characters $c = c_1 c_2 \dots c_n$,
- a set of words (each word is a sequence of characters), and
- a language model (first-order transitions between words),
to infer the segment boundaries in $c$ and obtain the word sequence $w = w_1 w_2 \dots w_m$.
If the optimal segmentation is the one maximizing $P(w_1 w_2 \dots w_m)$, then it can be found using a dynamic programming strategy. Use the following notation:
- $M$: the maximal length of a word,
- $n$: the length of the string $c$,
- $\langle s \rangle$: the special word denoting the start of a sentence,
- $\langle /s \rangle$: the special word denoting the end of a sentence.
the dynamic programming solution is the following recursive process, where $f(i, w)$ denotes the probability of the best segmentation of the prefix $c_{1..i}$ given that the word following position $i$ is $w$:
- Start: $P^* = \max_{1 \le k \le M} f(n-k,\; c_{n-k+1..n}) \cdot P(\langle /s \rangle \mid c_{n-k+1..n})$,
- Recursion: $f(i, w) = \max_{1 \le k \le \min(M, i)} f(i-k,\; c_{i-k+1..i}) \cdot P(w \mid c_{i-k+1..i})$,
- Stop: if $i = 0$, then $f(0, w) = P(w \mid \langle s \rangle)$.
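A minimal Python sketch of this recursive process, with memoization. The vocabulary and bigram probabilities below are made-up placeholders, and `segment` is a hypothetical helper name, not an established API:

```python
from functools import lru_cache

# Toy language model (made-up numbers, for illustration only).
# BIGRAM[prev][word] = P(word | prev); "<s>" and "</s>" are the
# special start/end words.
BIGRAM = {
    "<s>":  {"the": 0.6, "them": 0.1},
    "the":  {"cat": 0.5},
    "them": {"at": 0.3},
    "cat":  {"</s>": 0.9},
    "at":   {"</s>": 0.2},
}
VOCAB = {"the", "them", "cat", "at"}
M = 4  # maximal word length


def segment(c: str):
    """Return (probability, word list) of the best segmentation of c."""

    @lru_cache(maxsize=None)
    def f(i: int, w: str):
        # f(i, w): best probability of segmenting the prefix c[:i],
        # given that the word following position i is w.
        if i == 0:                       # Stop case: f(0, w) = P(w | <s>)
            return BIGRAM["<s>"].get(w, 0.0), ()
        best_p, best_seg = 0.0, None
        for k in range(1, min(M, i) + 1):
            last = c[i - k:i]            # candidate last word of the prefix
            if last not in VOCAB:
                continue
            p_rest, seg = f(i - k, last)
            p = p_rest * BIGRAM.get(last, {}).get(w, 0.0)
            if p > best_p:               # p > 0 implies seg is a tuple
                best_p, best_seg = p, seg + (last,)
        return best_p, best_seg

    # Start case: try the M possible last words before </s>.
    n = len(c)
    best_p, best_seg = 0.0, None
    for k in range(1, min(M, n) + 1):
        last = c[n - k:n]
        if last not in VOCAB:
            continue
        p_rest, seg = f(n - k, last)
        p = p_rest * BIGRAM.get(last, {}).get("</s>", 0.0)
        if p > best_p:
            best_p, best_seg = p, seg + (last,)
    return best_p, best_seg
```

For example, `segment("thecat")` returns probability $0.6 \cdot 0.5 \cdot 0.9 = 0.27$ with the segmentation `("the", "cat")`.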
As a tail recursion, this recursive process can be expressed in an equivalent iterative form: start by checking the $M$ possible words at the end of the sentence; for each of them, check the possible word segmentations at the end of the remaining sentence.
Alternatively, the above procedure can be expressed in an equivalent form: segment off the first word at the beginning of the sentence. This alternative expression yields an iteration that runs word segmentation from the beginning of the sentence.
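The forward (beginning-to-end) iteration can be sketched as follows; the tiny bigram table and the name `segment_forward` are illustrative assumptions, not part of any library:

```python
# Toy bigram model (illustrative numbers only); <s> and </s> mark the
# sentence boundaries.
BIGRAM = {
    "<s>": {"the": 0.6},
    "the": {"cat": 0.5},
    "cat": {"</s>": 0.9},
}
VOCAB = {"the", "cat"}
M = 4  # maximal word length


def segment_forward(c: str):
    """Best segmentation of c, filling a table from left to right."""
    n = len(c)
    # best[i][w]: (probability, segmentation) of the best segmentation
    # of the prefix c[:i] whose last word is w.
    best = [dict() for _ in range(n + 1)]
    best[0]["<s>"] = (1.0, [])
    for i in range(1, n + 1):
        for k in range(1, min(M, i) + 1):
            w = c[i - k:i]               # candidate word ending at position i
            if w not in VOCAB:
                continue
            for prev, (p_prev, seg) in best[i - k].items():
                p = p_prev * BIGRAM.get(prev, {}).get(w, 0.0)
                if p > best[i].get(w, (0.0, None))[0]:
                    best[i][w] = (p, seg + [w])
    # Close the sentence with the transition into </s>.
    candidates = [(p * BIGRAM.get(w, {}).get("</s>", 0.0), seg)
                  for w, (p, seg) in best[n].items()]
    return max(candidates, default=(0.0, None))
```

Both directions visit the same $O(nM)$ candidate words; the forward form simply fills the table from position $0$ upward instead of recursing from position $n$.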
If each processor holds the whole language model, it would be trivial to parallelize the above procedure. For example, in the Start step we need to compute $M$ branches before taking the $\max$; these $M$ computations can be distributed over $M$ processors.
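A sketch of this fan-out with Python's standard `concurrent.futures`; `score_branch` is a hypothetical helper that stands in for the real per-branch DP (here it returns a dummy score, as the point is only the map-then-max structure):

```python
from concurrent.futures import ThreadPoolExecutor


def score_branch(args):
    # Hypothetical per-branch work: score the Start-step branch in which
    # the last word of the sentence has length k. A real implementation
    # would run the DP on the prefix c[:-k] and multiply by the
    # transition into </s>; here we return a dummy score to show the
    # fan-out only.
    c, k = args
    return (k, 1.0 / k)


def parallel_start_step(c: str, M: int):
    # The M branches are independent, so they can be mapped onto
    # separate workers and reduced with max afterwards.
    with ThreadPoolExecutor(max_workers=M) as pool:
        scores = list(pool.map(score_branch, [(c, k) for k in range(1, M + 1)]))
    return max(scores, key=lambda kv: kv[1])
```

With a shared language model this is an embarrassingly parallel map followed by a single `max` reduction; the interesting case is the next one, where the model itself is sharded.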
But if the language model is huge and is distributed over multiple processors, it seems difficult (?) to divide and conquer the segmentation over these processors with an efficient sharding of both the computational structure (iteration steps) and the storage structure (parts of the language model).
The final question: since a sentence is usually not long, it might not be worth distributing its segmentation over multiple processors.