  1. Introduction
  2. Related Works
    1. Enhancing LLMs’ Translation Capabilities with ICL

      This work builds on previous research aimed at enhancing LLMs’ translation capabilities without additional training. A primary strategy leverages LLMs’ in-context learning ability, i.e., the ability to learn from demonstrations or descriptions (Brown et al. 2020; Wei et al. 2022). Studies have explored selecting appropriate exemplars for few-shot learning and providing linguistic knowledge in the prompt (Agrawal et al. 2022; Vilar et al. 2023; Zhang et al. 2024). Beyond supplying demonstrations or descriptions, choosing an appropriate temperature or prompting strategy has also been examined (Peng et al. 2023). Like this prior work, our method aims to improve LLMs’ MT capabilities without further fine-tuning; however, it focuses on eliciting the models’ own capabilities rather than augmenting them with external knowledge sources.

    2. Self-generated Prompts

      Manually crafting appropriate exemplars for in-context learning can be resource-intensive. To address this, prior work has explored letting models generate their own few-shot examples for classification (Lyu et al. 2023; Kim et al. 2022) and other reasoning tasks (Zhang et al. 2022; Li et al. 2024). Our work is related to these efforts in that the model also generates its own few-shot examples. It differs, however, in how those examples are produced: we mitigate the potential noise of synthesized data by gradually expanding the example set with similar yet distinct examples, as sketched below.
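      To make the idea concrete, here is a minimal, hypothetical sketch of gradually expanding a self-generated example set; the `TranslatorLLM` interface and its `translate`/`interpolate` methods are illustrative placeholders, not our actual implementation.

```python
# Minimal illustrative sketch: gradually expand a self-generated few-shot
# example set while moving from an easy start sentence toward the target.
# The LLM interface below is a hypothetical placeholder.
from typing import List, Protocol, Tuple


class TranslatorLLM(Protocol):
    def translate(self, sentence: str, examples: List[Tuple[str, str]]) -> str: ...
    def interpolate(self, current: str, target: str) -> str: ...


def gradual_translate(start: str, target: str, llm: TranslatorLLM, steps: int = 3) -> str:
    """Translate `target` after building few-shot examples step by step."""
    examples: List[Tuple[str, str]] = []
    current = start
    for _ in range(steps):
        # Translate the current (still relatively easy) sentence with the
        # examples accumulated so far, and keep the pair as a new example.
        examples.append((current, llm.translate(current, examples)))
        # Ask the model for a similar yet distinct sentence that is slightly
        # closer to the target, gradually expanding the example set.
        current = llm.interpolate(current, target)
    # Finally translate the target sentence with the expanded example set.
    return llm.translate(target, examples)
```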

  3. Method
    1. Sentence Interpolation
    2. Gradual MT
    3. Overall Framework
  4. Experiment
    1. Setup
      1. Translation Model
      2. Interpolation Model
      3. Dataset
      4. Start Sentence Pool Generation

  5. Results
    1. Overall Results
  6. Analysis
    1. Ablation

      To find the optimal setup for the framework, we conducted an ablation study on the en-ko MT task over four strategy choices: the start sentence selection method, the number of start sentences, the method for aggregating gradual MT results, and the method for choosing which source sentences to translate with the framework. In this section, we briefly summarize the effect of each strategy by averaging the QE scores; full results for every combination of strategies can be found in the Appendix.
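      The per-strategy averages reported below can be computed with a simple grouping over the raw ablation results; the following is a minimal sketch, assuming a flat results table whose column names (`strategy_type`, `strategy_value`, `qe_score`) and file name are hypothetical placeholders.

```python
# Illustrative sketch: average QE scores per ablation strategy value.
# Column and file names are hypothetical placeholders.
import pandas as pd

results = pd.read_csv("ablation_results.csv")  # one row per (configuration, sentence)

# Mean QE score for each value of each strategy type, averaged over all
# combinations of the remaining strategies.
summary = (
    results.groupby(["strategy_type", "strategy_value"])["qe_score"]
    .mean()
    .reset_index()
    .sort_values(["strategy_type", "qe_score"], ascending=[True, False])
)
print(summary)
```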

      1. Start Sentence Selection Method

        The start sentence selection method determines how start sentences are chosen from the start sentence pool.

        1. Average score by start sentence selection strategy
      2. Number of Start Sentences

        1. Average score by number of start sentences
      3. Aggregation Strategy

        1. Average score by aggregation strategy
      4. End Sentence Filtering

        1. Overall score by filtering method
        2. Delta computed separately for the cases that went through the framework
    2. Characteristics of Interpolation

      1. Basic Statistics

        1. Rate of interpolations without errors
        2. Average interval
        3. Average progress
        4. Max leap

        We also conducted a further analysis to examine various characteristics of sentence interpolation. Below is the list of variables we checked; a computation sketch follows the list.

        • Interval between consecutive sentences
        • PCA alignment
        • Number of interpolated sentences
        • Maximum interval between consecutive sentences
        • Interval between the last interpolated sentence and the end sentence
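        Most of these variables can be derived from sentence embeddings of the interpolation path. The snippet below is a minimal sketch, assuming intervals are measured as cosine distances between sentence embeddings; the embedding model is an arbitrary illustrative choice, and PCA alignment is omitted.

```python
# Illustrative sketch: interval statistics over one interpolation path.
# Assumes intervals are cosine distances between sentence embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer


def interval_statistics(start, interpolated, end, model_name="all-MiniLM-L6-v2"):
    model = SentenceTransformer(model_name)  # illustrative embedding model
    sentences = [start] + list(interpolated) + [end]
    emb = model.encode(sentences, normalize_embeddings=True)

    # Cosine distance between consecutive sentences along the path.
    gaps = [1.0 - float(np.dot(emb[i], emb[i + 1])) for i in range(len(emb) - 1)]

    return {
        "num_interpolated": len(interpolated),
        "average_interval": float(np.mean(gaps)),
        "max_interval": float(np.max(gaps)),  # maximum leap between sentences
        "last_to_end_interval": gaps[-1],     # last interpolated sentence -> end sentence
    }
```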
      2. Types of Interpolation

  7. Conclusion
  8. Future work