Five Mistakes We Paid Dearly For

Mistake. Setback. Critical error. Fiasco. Loss. Call them what you will, they’re all different names for the same tough spot where you have to decide whether you’re going to panic and sink into despair or learn your lesson and find a way out. The lion’s share of the mistakes we’ve made, as you might imagine, has come when we simply didn’t know what we were doing. They’re due to thoughtlessness, inexperience, or human error. But then again, even professionals slip up. Here at Allcorrect, we’ve seen our share of both sides, and it’s time to look back on them with a smile. 😊 Let’s do this.

Way back in 2010. Newbie mistake.

The Okey translation agency provides a range of services built around translation; the Allcorrect brand is yet to be born. With a few major gaming projects already landed, we have our sights set on much, much more. There’s no sales department, though we do have one very successful salesperson, her friends, and friends of the company. One of those referrals gets us an order from a major European software publisher. And it’s not just any old order! We’ve been hired to localize into Russian the very same professional software our musician friends use. The Okey translation agency is over the moon.

But there’s a catch: we’re supposed to use SDL Passolo, a visual localization tool. It shows you the translation as you go, along with what the end product will look like, which means you can edit both the text and, for example, panel sizes. In other words, we’re supposed to deliver a localized product that’s ready to go. But we’ve never used Passolo before. Still, we decide not to say anything, figuring it’s just another CAT tool we’ll quickly learn how to use.

A mountain of work files, a variety of additional materials, and a small library of style guides hit our inbox. We get down to business. Given that we’ve already learned a thing or two about planning and pre-translation analysis, we bring in a music creation expert (we need to figure out what all the terms mean and, more importantly, how all the different functions work) and lay out a clear game plan for the project. We even have a Passolo expert! It’s going to be a fun adventure, and we’re taking it very seriously.

The result looks pretty good too. Poking around in Passolo finally pays off as we figure out how to adjust window sizes and get everything looking perfect, put the whole thing through every quality check we can think of, chat with the experts—the works. As the deadline approaches, we submit the files and hold our breath waiting for feedback.

The call.

Practically the next day—it’s a Thursday—the office phone rings. And our phone never rings. That’s strange, so the most experienced member of our production team picks it up. On the other end of the line, the manager in charge of the client’s office in Germany politely explains that we will no longer be working together. The localization isn’t just awful; it has them rethinking their entire plan to release the Russian version of their product in the next two years. While we’re going to be paid, they will never work with us again under any circumstances. The answer is “no” to every question we can think of, and it’s only by some miracle that we find out the first hour or two spent checking through the files turned up 5,000 mistakes.

Three hellish days.

Panic quickly gives way to brainstorming, and we decide that something is off—we’ve put so much into the project that the quality has to be good. Suddenly, it hits us: it’s impossible to find that many mistakes in just an hour or two. The client must have used an automated QA system. But we don’t know of anything like that in Passolo. As it turns out, our “expert” is actually someone who has used the program only a few times. We begin poring over Passolo’s 300-page manual, hitting buttons in parallel to find the option we’re looking for. When we finally locate and launch the review, we’re shocked at the avalanche of alerts we’re buried under. Most of them are easy enough to fix by just adjusting panel sizes where the text overlaps the border by a single pixel. Those spots may be invisible to the human eye, but you can’t fool the automated QA review!

That same day, we take some time out between all the running around to call the German office with an explanation. Naturally, we write it all off as a technical error and ask for permission to resend the files. The client couldn’t be more disappointed. Nothing we say changes their mind, and it’s only when we promise to deliver everything in sterling condition by Monday morning (and to decline payment if it’s subpar) that they reluctantly give in.

Starting Friday morning, we have three days to fix all the mistakes, and the first thing we do is classify them. We build a huge Excel spreadsheet with all the error and review types: terminology, hotkeys, typos, and so on. The team splits up into squads, each of which takes charge of one type. And when we can’t find any good freelance editors willing to work morning to night over the weekend, most of the company’s management and a few in-house editors jump in. All five Kübler-Ross stages (denial, anger, bargaining, and so on), pizza deliveries, nighttime meetings—what don’t we have? Late Sunday night, the job is done and the files are sent.

History repeats itself that Monday: the office phone rings. But this time, the tone is different, and we’re thanked and promised more work. The localization, at least according to the automated review, is flawless.

We ended up getting many more fantastic projects from that client, and we learned our lesson, building processes that have earned us nothing but glowing feedback from them since. Life has been much more boring, of course.

The moral of the story: before you agree to work in a new field, make sure you know you can pull it off and understand the risks that come with failure.

2013. Newbie mistake 2.

We’re just beginning a long-term, large-scale project with a major mobile publisher. The very first orders are for English proofreading, since the texts were created somewhere in the bowels of the publisher itself by a non-native speaker and need a native speaker’s touch. After we do a good job with that, the client finally offers us our first multi-language project. We’ve never localized into this many languages at once, having specialized to date in Korean and Chinese MMOs. The project is a big one. As it’s an entire title, we need to translate it from English into the main European languages and a couple of Asian languages. The whole office is thrilled—this is our shot at graduating to the level of an MLV, or multi-language vendor.

But where are we going to find the team for such a wonderful project? With no translator database of our own, we head over to proz.com, the largest and most popular resource out there, to look for professionals. The reviews and ratings help us find the translators we need, and we work steadily for a while. The number of projects we’re handling grows.

Our payment system is far from ideal, and quite a bit of time goes by before we notice something strange: one of our regular French translators—we’ll call him Paul Dubois—has a real name that doesn’t look French at all. His location looks suspicious too. He turns out to be Palestinian. Right about then, we start to get our first rejected texts from the client, and they’re in French and German. The grammatical errors highlighted in the texts are what you’d expect from a fifth-grader. It hits us: a huge share of our French work is just bad. Things are slightly better with German. While the translator we’re working with really is German, he has no problem using obscene language in his translations, and the game is rated 3+.

We’re forced to quickly come up with a response plan. All the content in question is pulled, we build criteria for picking out the parts we can keep, set up teams to neutralize everything else, and, of course, plan a series of report meetings with the client and our CEO to make sure everyone knows where we stand. The volume of work we have to edit or retranslate from scratch is so immense that it takes about a year, and that’s a huge expense for us. But this is what gave us the kick in the pants we needed. Yes, this was a mistake we paid dearly for. On the other hand, it was a key growth point for us too, since it laid the foundation for our quality assurance system and the detailed vetting process we now have in place for every linguist who works with us.

2018. Professional mistake: next level.

It’s our first full game localization into Arabic. Oh, the litany of articles written and tears shed about RTL (right-to-left) languages! But the first time you dip your toe in the water, there’s no way of knowing what it’s going to be like.

Our first Arabic project comes from a European publisher who, while large, has about as much RTL localization experience as we do. Since our processes at this point include barely any preproduction (the stage where you’d evaluate the risks and lay out a strategy), we just take the order and jump right in. That ends with us spending tons of resources correcting mistakes we could have easily avoided if we’d only analyzed the job before starting work on it. But hey, that’s how we learn, right?

What did we start with? One highly experienced and heavily loaded project manager juggling thousands of assignments, plus one junior manager (the first ever at Allcorrect). One trusted Arabic linguist the whole project was simply handed to. No clear oversight system for juniors and no division of responsibilities. No project schedule (we didn’t have them at all back then). No tested team of Arabic linguists, since we hadn’t yet invented the fake projects we now put all our teams through (they’re paid and look like regular projects, only the results stay within the company to help us evaluate new linguists before giving them real work). Looking back, we simply subcontracted the project out, ran a few formal reviews, and passed the work on to the client.

How did that end up? The managers didn’t know who was responsible for what, and the term base slipped through the cracks. As you might imagine, that caused problems: we had major issues with consistency. Reviews amounted to random checks by unvetted linguists, so there was no quality assurance to speak of. The lack of a project schedule meant we weren’t able to meet the deadline. Ultimately, RTL projects are always complicated when the manager doesn’t have at least a general idea of how the text works—what diacritics are, how tags fit in, how capitalization is conveyed (yep, there are no capital letters in Arabic), what Modern Standard Arabic is, and how to evaluate quality in a standardized literary language (MSA is a constructed literary norm meant to bridge the regional dialects).

The client wasn’t even able to stick our Arabic text into the game when we sent it to them. That kicked off a multi-step process to figure out what happened and fix the mistake: R&D sessions, in-game testing, and lots of calls with the client. It turned out that the client had problems of their own: their engine wasn’t equipped to handle RTL texts, the interface wasn’t reversed, and the tags were all over the place. Incidentally, adding a Syrian game tester sent us through a whole different negative spiral since the translator was Egyptian and the two had very different understandings of MSA.

We were ultimately able to wrap things up well: the client was happy, and we fulfilled our obligations. But going into the project blind and without a plan made it one of the toughest we’ve ever tackled. Not only that, but the linguists went toe to toe over the differences in their dialects, and it was hard for us, as the agency in the middle, to tell what we actually wanted to walk away with in the end.

2018. Organizational mistake. Also professional, just the other side of the coin.

A multilanguage project hits our inbox, one that’s enormous even by our standards today, and it’s supposed to last two or three months.

It’s important enough that it’s assigned to a senior manager. But she’s about to be promoted to head of the production department, the largest at Allcorrect, and her daily planner is packed with strategy and operations meetings she can’t miss. Ultimately, she handles her projects at night. We don’t yet have a system for managing the load placed on project managers. Later on, we’ll introduce labor norming, double backups, and steps we take at the first sign a manager is burning out. But right now, the lack of that system is keenly felt—our senior manager has bitten off more than she can chew.

We also don’t require project schedules at this point, and there seems to be no reason to keep an eye on our best project manager. Nobody’s better at planning and running projects.

Another problem is that we don’t know what kind of file management system the client uses on their side. Even though we work with memoQ, we get Trados packages (Trados is a translation memory tool) containing 428 Excel files per language. Each file is organized by character, listing everything that character says. In other words, there’s no logical structure: just a hundred lines from one character followed by a hundred from another, and so on. There are no scripts, and the translators have no clue how the lines fit into the plot.

Each update we’re sent means reloading all 428 files and recalculating statistics based on translation memory segmentation.

Because the files are so big, we have to house the project on its own server with separate login details for everyone working on it, which adds complications whenever we need to change a workflow. The first submission reveals that the client isn’t using Trados for anything more than file storage, and they’d actually much prefer to work directly in Excel. We move to exchanging Excel spreadsheets, only they’re segmented differently, and our translation memory has no idea how to handle the switch. Updating the original files is practically impossible.

That forces us to introduce a new requirement: we always ask the client what their workflow is for localization files.

The project also falls during the peak of the summer holidays, so putting a team together is a challenge.

With none of that working in our favor, it would be hard to call what we submitted some of our best work. We ended up with lots of negative feedback from the client and spent a long time correcting our mistakes.

What happened after that? After a Kaizen event, we discovered that we, as a lean company, had forgotten how to work with big projects. We’d been treating everything like mid-cycle mobile projects. And that was a mistake! We followed that with a deep dive into our basic principles, coming out the other side with the Allcorrect bible—the immutable truths of lean manufacturing adapted to our work. But that’s a very different story from the Allcorrect annals. 😊

Allcorrect Bible in our office

2021. Fresh mistake.

The Allcorrect portfolio includes clients and titles of all shapes and sizes. The company is growing, processes are being calibrated on the fly, and we’re constantly hiring. With an incessant flow of incoming talent, a large share of the work falls to juniors: junior project managers and newly minted production group leaders. Importantly, the volumes we’re dealing with mean more and more processes need to be standardized. That’s the context in which we receive a project from a publisher we’ve known for a long time but haven’t fully worked with yet. Our chance to prove ourselves has just walked in the door, and the client’s requirements seem perfect for it: we need to localize a game into six languages within tight deadlines. The texts are relatively simple, with mechanics built on logic puzzles and wordplay.

Time is short, and we jump right in. According to our processes, the first thing we need to do is run a pre-translation analysis and fill out the project specifications with the client. The pre-translation analysis doesn’t flag any problem areas, since the texts look easy enough (“Mr. X lives in the green house, and Mr. Y lives in the blue house”). We get on a call with the client too early, not yet aware of the risks involved. A junior manager divides the text into two halves and hands one each to two translators for every language, a junior team lead confirms the project schedule, and work starts. The workflow calls for an independent editor to review the two translators’ work before everything is combined. With no time for that, we decide to dump that last step on a single person: one of the translators is supposed to evaluate the other’s work and make sure their respective parts are consistent with each other. It’s a fatal error for two languages. The linguists we’ve selected are up to their ears in other active projects, and that impacts the quality of the work we submit (to be fair, the other projects suffer too, though we have room to maneuver there).

The feedback from the client didn’t take long to arrive. The puzzles built on wordplay didn’t work because the translations simply didn’t convey the key meanings. Since the linguists didn’t have the pictures for each puzzle, they were missing a key element telling them what was going on. Important character names were simply transliterated in places, losing the player in the process. To take one example, Spot, the name of an animal, was transliterated, which meant the player couldn’t answer a question about the spots all over it. There was confusion with meters, miles, and a size chart across the languages we localized into: some translators converted the original units to the ones they were used to, while others left them as they were. But there were also some unthinkable mistakes, like “uncommon” being translated as “common,” the responsible linguist’s workload taking its toll in silly errors. Anyone reading the translation on its own would have thought it looked fine; it was only when it was laid on top of the game mechanics that the problems reared their ugly heads.

Needless to say, we fixed everything. We tested the game for free and edited all the texts. Still, the added expense made the project a net negative, compounded by the hit to our morale and reputation. The upside was that our juniors matured in record time, and we added some important steps to our workflow for logic games. Since then, we’ve avoided taking on games built around logic puzzles unless linguistic testing, and plenty of time, are part of the deal.

In closing, it’s worth noting that each one of these mistakes was paid for in manager tears and has formed an invaluable episode in the company’s story. But that’s a bit too poetic. Let’s just say that mistakes are always opportunities for growth over the long term. Here at Allcorrect, we know that better than anyone. :)

Allcorrect
Allcorrect is a gaming service company. We help game developers free up their time from routine processes in order to focus on key tasks. We provide professional game localization into 40+ languages and create game art of all levels of complexity. We also offer localization testing, voice-over, and culturalization adaptation of in-game content. Our team adores games and complex projects. We’re incredibly proud of our clients, including both world-renowned AAA developers and indie companies that have successfully entered the international market.