Archive for November, 2013

American Translators Association: Publications - The ATA Chronicle: Featured Article

29/11/2013

See on Scoop.it - Terminology, Computing and Translation

ATA is a professional association founded to advance the translation and interpreting professions and foster the professional development of individual translators and interpreters.

See on www.atanet.org

STOA | State of the art of Machine Translation – Current challenges and future opportunities

29/11/2013

See on Scoop.it - Terminology, Computing and Translation

See on www.europarl.europa.eu

The 3rd variable in translation

28/11/2013


Quality is the third and most complex variable in translation. While cost and speed are easy to measure, the quality of a translation is much more difficult to assess. One can compare two translations, or even two translators, by looking at price and speed of delivery, but that won't say anything about the quality of the product. All LSPs claim to deliver top-quality translations. Still, we all know that there are translations and translations. What are the key features of good content and how do we measure them? And how can you compare the output of machine translation systems in a consistent way?

Measuring and benchmarking quality is easier said than done. There are a number of automated metrics, such as BLEU and METEOR, for measuring MT output, but when it comes to measuring style, accuracy, intelligibility or grammar, these metrics remain rather abstract. BLEU is designed to approximate human judgment at the corpus level and performs badly when used to evaluate the quality of individual sentences. Moreover, existing metrics do not allow for subjective assessment; they are always relative, comparing translations to each other rather than judging one translation on its own terms.
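To make the corpus-versus-sentence distinction concrete, here is a minimal sketch using NLTK's BLEU implementation. The reference and hypothesis sentences are invented for illustration, and the smoothing choice is an assumption; scores on such tiny samples mean nothing in themselves.

# Minimal sketch: corpus-level vs. sentence-level BLEU with NLTK.
# The sentences below are toy examples; the smoothing method is an assumption.
from nltk.translate.bleu_score import corpus_bleu, sentence_bleu, SmoothingFunction

references = [
    [["the", "cat", "sits", "on", "the", "mat"]],          # one reference per segment
    [["translation", "quality", "is", "hard", "to", "measure"]],
]
hypotheses = [
    ["the", "cat", "sat", "on", "the", "mat"],             # MT output, tokenized
    ["translation", "quality", "is", "difficult", "to", "measure"],
]

smooth = SmoothingFunction().method1  # avoids zero scores on short segments

# Corpus-level BLEU: the level the metric was designed for.
print("corpus BLEU:", corpus_bleu(references, hypotheses, smoothing_function=smooth))

# Sentence-level BLEU: noisy and often misleading for single segments.
for refs, hyp in zip(references, hypotheses):
    print("sentence BLEU:", sentence_bleu(refs, hyp, smoothing_function=smooth))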

Fully automatic evaluation of translations is unlikely to become possible in the near future, and human evaluation will always be necessary. Besides, translation quality evaluation should also involve the end users of the translation: users should be included in defining the quality they expect. Existing QE strategies, be they automatic or manual, usually don't take into account those who use the translated content on a daily basis.

The most we can do at this point is optimize human evaluation by providing best practices and tools that standardize the evaluation process and make it more objective and transparent: a sort of "Computer-Aided Quality Evaluation of Translation". This type of evaluation should also take into account the quality expectations of the users, the different text types and usage scenarios, and consequently be dynamic. This is exactly what TAUS, the innovation think tank of the translation sector, came up with when it launched the Dynamic Quality Framework (DQF) in 2012.

DQF enables industry collaboration on quality metrics. The shared knowledge base and tools in the framework allow users to benchmark performance and aid the pursuit of best practices in quality evaluation. The DQF tools provide a tool-neutral, vendor-independent environment for the human evaluation of machine translation quality. Users gather vital data to help establish return on investment, measure productivity enhancements, and benchmark performance, helping to ensure that informed decisions are made.
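DQF itself is a hosted platform rather than a library, but the "dynamic" idea behind it, weighting evaluation criteria differently depending on content type and usage scenario, can be sketched in a few lines of Python. The criteria, weights and 1-4 scale below are purely illustrative assumptions, not the actual DQF metrics.

# Illustrative sketch only: human evaluation scores combined with weights
# that depend on the content type (the "dynamic" idea). All criteria,
# weights and the 1-4 scale are assumptions, not the real DQF scheme.
WEIGHTS_BY_CONTENT_TYPE = {
    "user_documentation": {"accuracy": 0.4, "fluency": 0.3, "style": 0.1, "terminology": 0.2},
    "marketing":          {"accuracy": 0.2, "fluency": 0.3, "style": 0.4, "terminology": 0.1},
}

def quality_score(scores, content_type):
    """Combine per-criterion human scores (1-4) into one weighted score."""
    weights = WEIGHTS_BY_CONTENT_TYPE[content_type]
    return sum(weights[c] * scores[c] for c in weights)

scores = {"accuracy": 4, "fluency": 3, "style": 2, "terminology": 4}
print(quality_score(scores, "user_documentation"))  # 3.5: accuracy dominates
print(quality_score(scores, "marketing"))           # 2.9: weak style costs more here

The point of the sketch: the same raw evaluator scores yield different overall verdicts under different scenarios, which is what makes such a framework "dynamic".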

All in all, the platform is the product of a unique and long-awaited initiative. It is rich in content and an outstanding attempt to join academic research on translation quality with industry practices. Still, quite some work needs to be done before it becomes a successful benchmark for translation quality across the whole industry.

For more information, please visit:

https://www.taus.net/taus-launches-translation-quality-evaluation-benchmark-platform

2014 CRITT – WCRE Conference | THE BRIDGE

28/11/2013

See on Scoop.it - Terminology, Computing and Translation

See on bridge.cbs.dk

Is your CAT tool compatible with Trados Studio?

28/11/2013

See on Scoop.it - Terminology, Computing and Translation

There are many reasons why you would choose to buy Trados Studio: you do not want to rule out the possibility of working with SDL; your main clients have it and you want to comply; you like the interface; you want to start working with other Trados…
See on marielucchetta.wordpress.com

SpeakLike Announces Human Translation API with Rules-Based Workflow

28/11/2013

See on Scoop.it - Terminology, Computing and Translation

Broadway World: SpeakLike announces the availability of its new Human Translation Application Programming Interface (API).
See on inttranews.inttra.net

Keep your old clients happy

28/11/2013

See on Scoop.it - Terminology, Computing and Translation

It is so much easier to get your existing clients to come back for more work than it is to score new clients.
See on latitudescoachblog.com

Aligning files in Transit NXT

28/11/2013

See on Scoop.it - Terminology, Computing and Translation

If you have files that were translated without a CAT tool and would like to use them as reference material, Transit NXT offers you a very powerful and easy-to-use alignment tool to convert them into language pairs which can …
See on transitnxt.wordpress.com
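Transit NXT's aligner is proprietary, but the core idea of alignment, pairing source and target sentences into translation units, can be illustrated with a deliberately naive sketch. Real aligners (Gale-Church, for example) use length statistics and allow 1:2 and 2:1 pairings; everything below is a simplifying assumption.

# Naive alignment sketch: pair sentences 1:1 and keep pairs whose
# character lengths are roughly proportional. Real aligners are
# statistical; this is only an illustration of the idea.
def naive_align(source_sents, target_sents, max_ratio=1.8):
    pairs = []
    for src, tgt in zip(source_sents, target_sents):
        ratio = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
        if ratio <= max_ratio:        # lengths plausible for a translation pair
            pairs.append((src, tgt))  # keep as a translation unit
    return pairs

src = ["The file was translated earlier.", "We want to reuse it."]
tgt = ["Die Datei wurde früher übersetzt.", "Wir möchten sie wiederverwenden."]
for s, t in naive_align(src, tgt):
    print(s, "<->", t)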

Simple Facts about Translation Memory

25/11/2013

See on Scoop.it - Terminology, Computing and Translation

Of the many tools used by translation professionals, one of the most difficult to grasp for non-translators is Translation Memory (TM).

See on www.onehourtranslation.com
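For readers who find TM hard to picture, here is a minimal sketch of the underlying idea: a store of previously translated segment pairs queried by fuzzy matching. The 75% threshold and difflib-based similarity are illustrative choices; commercial CAT tools use far more sophisticated matching.

# Minimal translation-memory sketch: store source/target segment pairs
# and retrieve the closest fuzzy match for a new source segment.
# Threshold and similarity measure are illustrative assumptions.
from difflib import SequenceMatcher

tm = {
    "Click the Save button.": "Cliquez sur le bouton Enregistrer.",
    "The file could not be opened.": "Le fichier n'a pas pu être ouvert.",
}

def best_match(segment, threshold=0.75):
    """Return (source, target, score) for the closest TM entry, or None."""
    source = max(tm, key=lambda s: SequenceMatcher(None, segment, s).ratio())
    score = SequenceMatcher(None, segment, source).ratio()
    return (source, tm[source], score) if score >= threshold else None

print(best_match("Click the Save button now."))  # close enough: a fuzzy match
print(best_match("Restart the application."))    # no match above the threshold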