Usage of the static "foldToASCII" method in ASCIIFoldingFilter

Question


I have been using the ASCII folding filter to handle diacritics, not just for documents in Elasticsearch but for various other kinds of strings as well:

import com.google.common.base.Strings; // assuming Guava's Strings.isNullOrEmpty
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;

public static String normalizeText(String text, boolean shouldTrim, boolean shouldLowerCase) {
    if (Strings.isNullOrEmpty(text)) {
        return text;
    }
    if (shouldTrim) {
        text = text.trim();
    }
    if (shouldLowerCase) {
        text = text.toLowerCase();
    }
    char[] charArray = text.toCharArray();

    // Once folded, a single character can expand to more than one character.
    // The Javadoc says the output array should be of size >= length * 4.
    char[] out = new char[charArray.length * 4 + 1];
    int outLength = ASCIIFoldingFilter.foldToASCII(charArray, 0, out, 0, charArray.length);
    return String.copyValueOf(out, 0, outLength);
}
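
For illustration, a quick hypothetical check of the method's behavior on a string with diacritics (the input string and expected output here are my own example, not from the original post):

// Hypothetical usage: diacritics folded to ASCII, trimmed and lower-cased.
String folded = normalizeText("  Caffè Crème  ", true, true);
System.out.println(folded); // prints "caffe creme"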

However, as per the official documentation, the method carries the note "This API is for internal purposes only and might change in incompatible ways in the next release." The alternative is the non-static foldToASCII(char[] input, int length) method (which internally calls the same static method), but using it requires wiring up an ASCII folding filter, a token filter, a token stream, and an analyzer (this means choosing the kind of analyzer, and I might have to create a custom one). I couldn't find examples where developers have done the latter.

I tried writing some solutions of my own, but the non-static foldToASCII doesn't return the exact output; it appends a run of unwanted characters at the end. I am wondering how other developers have dealt with this?
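
(For reference, a minimal sketch of what that wiring can look like, assuming lucene-core and lucene-analysis-common on the classpath: a KeywordTokenizer wrapped directly in an ASCIIFoldingFilter, with no analyzer needed. The filter's incrementToken() drives the non-static folding internally and exposes the folded term through the term attribute.)

import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Sketch: KeywordTokenizer emits the whole input as one token,
// and ASCIIFoldingFilter folds it during incrementToken().
Tokenizer tokenizer = new KeywordTokenizer();
tokenizer.setReader(new StringReader("Caffè"));
try (TokenStream ts = new ASCIIFoldingFilter(tokenizer)) {
    CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    String folded = ts.incrementToken() ? termAtt.toString() : "";
    ts.end();
    System.out.println(folded); // prints "Caffe"
}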

EDIT: I also see that some open source projects use the static foldToASCII, so another question would be whether it is really worth it to use the non-static foldToASCII.

Answer 1

Score: 1


Based on a comment by @andrewJames, below is the closest I was able to get without using the static method. KeywordTokenizer emits the entire input as a single token, so there is no need to loop through tokens.

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.KeywordTokenizerFactory;
import org.apache.lucene.analysis.custom.CustomAnalyzer;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilterFactory;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

String text = "Caffè";
String output = "";

try (Analyzer analyzer = CustomAnalyzer.builder()
        .withTokenizer(KeywordTokenizerFactory.class)
        .addTokenFilter(ASCIIFoldingFilterFactory.class)
        .build()) {
    try (TokenStream ts = analyzer.tokenStream(null, new StringReader(text))) {
        CharTermAttribute charTermAtt = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        if (ts.incrementToken()) {
            output = charTermAtt.toString();
        }
        ts.end();
    }
} catch (IOException e) {
    // Handle or propagate the exception in real code; swallowing it hides failures.
}

System.out.println(output); // prints "Caffe"
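
If this needs to be called from more than one place, the same approach could be wrapped in a helper; a hedged sketch follows (the name foldWithAnalyzer is hypothetical, the Lucene classes are the same as above). Building the Analyzer once and reusing it is preferable, since construction is relatively expensive and analyzers are designed to be reused across tokenStream calls:

// Hypothetical helper wrapping the analyzer-based folding above.
// The static ANALYZER is built once and reused across calls.
private static final Analyzer ANALYZER;
static {
    try {
        ANALYZER = CustomAnalyzer.builder()
                .withTokenizer(KeywordTokenizerFactory.class)
                .addTokenFilter(ASCIIFoldingFilterFactory.class)
                .build();
    } catch (IOException e) {
        throw new ExceptionInInitializerError(e);
    }
}

public static String foldWithAnalyzer(String text) throws IOException {
    try (TokenStream ts = ANALYZER.tokenStream(null, new StringReader(text))) {
        CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        String result = ts.incrementToken() ? termAtt.toString() : "";
        ts.end();
        return result;
    }
}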
