The Digital Un-divide
The Digital Divide isn't. The implications of this technology generation are greater accessibility and lower infrastructure costs, and the developing world has been skipping past entire technology generations. It is still going on today: mobile payments took off in the "developing" world long before the developed world.
Sunday, January 26, 2014
My lifelong fascination is with how technology lets us solve the same old problem in a different way, and equally with how chances are sometimes missed and superior solutions get sidelined for all the wrong reasons. In the early seventies I worked for a while at the PCGD, or Postgiro, the Dutch postal payment service, and I studied its business model and its technology intensely, which set me up for other IT-related roles later in life. In its day, it was considered one of the most advanced implementations of the IBM/360 architecture.
In those days the network was the postal service, and in my native Holland there was a complete parallel system of mailboxes: one blue mailbox for the PCGD next to every red mailbox for the regular mail. Payments were made with an instruction to the PCGD, filled out on the 50-column portion of an 80-column Hollerith punch card, with the 30-column stub serving as your own record of the transaction. The speed was incredible. Holland is an area the size of Connecticut, Massachusetts and Rhode Island combined, and for the most part, if you posted your payment in a timely fashion, your instruction would reach the PCGD in one day and take one day for processing (key punching and the account transfer), so that the money would typically be in the other party's account within 48 hours, and they would receive notice a day later. The banks could never beat that efficiency, and so the entire adult population of Holland, some 10 million people, had an account with the Postgiro. Even today there is nostalgia for the old advertising campaigns, one of which featured John Cleese: Giroblauw past bij jou! (Giro blue suits you!), and a book by that name is still available for fans of this venerable institution, via the website www.blauw-bloed.nl (blue blood).
Banks could not compete, in part because of their federated system, and it took them a long time to develop comparable efficiencies. The PCGD was not a bank, but purely a payment service, a sort of automated postal money order. As the banks caught up, the speed advantage shrank, and the PCGD tried to compete by becoming increasingly bank-like. In the mania to privatize government services, the PCGD was first merged into the postal savings bank, drifting ever further toward being a bank, until in 2008 it was taken over by ING.
What no one noticed in the process is that the PCGD system had certain innate advantages, which were lost in the shuffle but are more relevant today than they ever were. We live in days of increasing scrutiny of monetary transactions, with requirements like KYC and AML, all of which are weak under a federated system, because a bank knows only one side of a transaction, whereas the old PCGD model let one organization see both sides of a transaction. That design is inherently superior.
There is another advantage, which could be powerful in today's world, where the network is not the postal system but the Internet. Remember how banking has worked ever since the first time someone wrote a "check," or rather a "cheque," probably on the back of a napkin in a coffee shop or its equivalent in the Achaemenid empire. From the article on Wikipedia: "The drawer writes the various details including the monetary amount, date, and a payee on the cheque, and signs it, ordering their bank, known as the drawee, to pay that person or company the amount of money stated."
The very model presents security problems, which are now coming home to roost, because our payment systems have not really evolved except in doing the same old stuff faster and in more forms. A check is an authorization for a third party to withdraw money from my account at the bank; a credit card transaction or an ACH is no different. This very concept introduces a security problem, for the amount could be changed, the check duplicated and presented more than once, and so on. The crucial concept is that this is payment by pull: a third party PULLs the money from my account. The PCGD model had at least part of the solution, for it used payment by PUSH. Originally, the user gave the PCGD an instruction to pay amount x to the PCGD account of the counterparty in the transaction. This model per se limits the potential for fraud by the payee, and the only remaining issues are mistakes in transmission between the payor and the PCGD; to address those, there was elaborate quality control on the input end of the process. This is where I worked, and I got to know the system really well, and I was keenly aware at the time of its design superiority over the concept of writing checks. Recent events at Target and Neiman Marcus are a fresh reminder of the downside of payment by pull.
All of our electronic systems, debit cards, credit cards, ACH, etc., tend to work on the same old model: authorizing another party to withdraw money from my account or accounts, and payment fraud is rampant because that situation is unmanageable. What if we turned it around? With today's technology, payment by PUSH would be really powerful if it were combined with adequate authentication of the payor. Instead of an unlimited number of parties that might be authorized to withdraw money from an account, only the owner of the account could issue payments. We would have reduced the security problem from an unlimited number of potential compromises to just one.
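To make the contrast concrete, here is a minimal sketch of the two models in code. Every name in it is hypothetical, including the toy credential checks; it describes no real payment system, only the shape of the argument:

# Hypothetical sketch of pull vs. push payments. In the pull model, anyone
# holding the account's payment credentials can originate a withdrawal; in
# the push model, only an instruction authenticated as coming from the
# account owner moves money, so there is exactly one point to secure.

class Account:
    def __init__(self, owner: str, balance: int = 0):
        self.owner = owner
        self.balance = balance  # in cents

def pull_payment(payee: Account, payor: Account, credentials: str, amount: int):
    # Pull: the PAYEE initiates, using credentials it was handed (a check,
    # a card number, an ACH authorization). Every party that ever saw those
    # credentials is a potential point of compromise.
    if credentials != payor_credentials(payor):
        raise PermissionError("bad credentials")
    transfer(payor, payee, amount)

def push_payment(payor: Account, proof_of_identity: str, payee: Account, amount: int):
    # Push: the PAYOR initiates; the service authenticates the owner, and no
    # one else can move the money. Fraud by the payee is designed out.
    if not authenticate_owner(payor, proof_of_identity):
        raise PermissionError("payor authentication failed")
    transfer(payor, payee, amount)

def transfer(src: Account, dst: Account, amount: int):
    if src.balance < amount:
        raise ValueError("insufficient funds")
    src.balance -= amount
    dst.balance += amount

def payor_credentials(account: Account) -> str:
    return f"card-number-of-{account.owner}"  # stand-in for a card or check

def authenticate_owner(account: Account, proof: str) -> bool:
    return proof == f"signature-of-{account.owner}"  # stand-in for real auth

alice = Account("alice", 10_000)
bob = Account("bob")
push_payment(alice, "signature-of-alice", bob, 2_500)  # only alice can do this

Note how in the push version the authentication check is the single thing that has to be made strong; in the pull version, safety depends on every holder of the credentials behaving.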
As a corollary to the security problems of the check model, there must necessarily be consumer protections that allow repudiation of a transaction. Checks can be stopped or reversed, ACHs can be challenged, and credit cards have chargebacks: all necessary protections, because there are too many ways to compromise a given withdrawal. In other words, all of these modern payments are only really final after their respective protections expire. Checks are revocable for nine months, ACHs for up to 48 months depending on the jurisdiction, and credit card chargebacks are allowed for up to six months, sometimes longer. These protections are absolutely necessary because of the design of the process, and as a result none of these payment methods can take the place of cash. The only alternative is a SWIFT wire, but at a retail cost of some $30 per transaction it is practical only in exceptional circumstances. What if there were...? Stay tuned!
Saturday, September 29, 2012
e-Book Purgatory
The dominant theme of this irregular blog is that the so-called digital divide is more so-called than actual. For me, the seminal example was the moment when, while do-gooders in the developed world were bemoaning the digital divide, the developing world realized it was cheaper and easier to build out cellular networks than to lay copper everywhere, and soon the developing world was ahead, with actual profitable uses of cellular phones that were often more sophisticated than anything spoiled consumers in the developed world did with them. Farmers and fishermen in the developing world were using cellphones to create economic value while the developed world was busy listening to iTunes.
Today we are in the world of the iPhone 5 and various Android phones, which take just 40 or 50 cents' worth of electricity per year to charge. These phones are still in the toy stage, for the batteries do not last, and of course the networks that support them require extensive investment and are far from omnipresent; but feature phones are getting ever cheaper too, and with text plus voice, amazing things are already possible.
Something else is happening too. The ecosystem that supports these technologies can be located anywhere, and already there is an interesting trend of siting data centers to leverage renewable energy opportunities, so the Internet experience of presence as a global concept also works in reverse.
The best university courses are becoming available online, and Amazon has for some time sold more e-books than print books. But it goes further than that: the e-book phenomenon is making books omnipresent too. Being of the immigrant variety myself, I realize that where in the past the cost of shipping a Dutch book from Holland to New York might make it prohibitive to buy Dutch books unless I happened to be visiting, now I'll be able to read Dutch books anywhere. But hold on to your hat. Publishers have not yet found out that there are readers for their books beyond the shipping reach of print. Oftentimes Dutch books are available within Holland but not outside it, which means publishers are missing out on a market that is twice the size of Holland itself. And undoubtedly the same is going on in other countries. Think Greece: there are probably more Greeks living outside Greece than within it. Israel, same story. China, anyone? There are some Chinese populations outside of China...
In short, the digital divide once again is a joke on the developed world, not the developing world. With newer e-readers now offering as much as eight weeks of battery life, they are becoming viable almost anywhere, the potential spread of the printed word is tremendously increased, and being multi-lingual is becoming easier. Add to this that even educators are finally catching up to what the rest of us already knew, namely that a multi-lingual education does not handicap a child but rather improves its learning. Most likely the observed correlation simply reflects the fact that being multi-lingual tends to develop abstract thinking much earlier.
So, as happens so often, the technology is racing ahead of people still stuck in the old paradigm, and publishers and book distributors need to catch up and address themselves to new, global realities. Interestingly, I recently spoke with one small US-based publisher which is very conscious of this situation and has concluded that it needs to focus on EPUB e-books, because the global demand it sees favors that format: the MOBI format (Amazon Kindle) is still mostly a US phenomenon, so Sony, Barnes & Noble (Nook), Kobo and others are taking the lead elsewhere, something not yet understood in the US press.
Now, if publishers and book distributors can also join the digital era, they will soon find that the markets for their books are double or more the size of the native markets in which print books can be distributed economically. In short, let's wait and see how the developed world solves this particular digital divide, which is caused entirely by its being stuck in the old paradigm of print books.
As a word of caution, e-books will in the end not entirely replace print books. There are enduring features of print books that will keep them viable in the long run, though my feeling is that cheap one-read books will disappear and books will become more of a specialty item. Publishers will have to be re-educated to produce quality books, and that means sewn signatures, glued, and sometimes actually bound. Durability will be a key value, both in the physical sense and in the sense that a book is something you actually own and can will to your kids, whereas an e-book is generally a non-transferable license to a single user: it is non-durable by definition, as well as in fact. Family Bibles will not be e-books.
Another place where the digital divide makes itself felt is in e-book production. I experienced this myself with my own publisher, who converted one of my books into e-books even though I had already prepared a new edition suitable for the e-book format. The problem was a table which, through automated conversion, became an image, and unreadable, while I had already converted the table to a text format that would have worked on an e-reader. Then, in a recent e-book project, we worked with a production company that insisted that justified text on a Sony e-reader was handled by the device and not at the file level; this was promptly disproven by the publisher's nineteen-year-old son, who produced justified text on the Sony device by simply forcing justification through the style sheet. The last mystery we encountered was that cover pages behave differently in EPUB and MOBI, and our production company did not know how to handle them in EPUB, which is annoying, to say the least, from people touting their expertise in supporting multiple formats. To make us all feel better, the newest book by J.K. Rowling seemed to have even worse problems with its original e-book edition. So it will take some time for this particular digital divide to close, and for publishers, editors, and production companies to truly learn how to produce quality e-books. As always, you cannot make it up on volume. For publishers to have relevance in the future they will have to provide superior editorial, marketing and production skills; otherwise they will become obsolete and fall right into the digital divide.
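Since an EPUB is just a ZIP container of XHTML and CSS, the style-sheet fix the nineteen-year-old found is a one-rule change. Here is a minimal sketch of applying such a fix; the file paths and the stylesheet name are assumptions (a real EPUB declares its stylesheets in the OPF manifest), not a description of his actual steps:

# Hypothetical sketch: append a justification rule to the stylesheet inside
# an EPUB. Since an EPUB is a ZIP archive, we rewrite the archive with the
# one modified member; everything else is copied through unchanged.
import zipfile

CSS_RULE = b"\np { text-align: justify; }\n"

def force_justification(epub_in: str, epub_out: str, css_name: str = "stylesheet.css"):
    with zipfile.ZipFile(epub_in) as src, zipfile.ZipFile(epub_out, "w") as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename.endswith(css_name):
                data += CSS_RULE  # the file-level fix the device then honors
            dst.writestr(item, data)  # preserves each member's compression

# Usage (assumed file names):
# force_justification("book.epub", "book-justified.epub")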
Copyright © 2012 Rogier F. van Vlissingen. All rights reserved.
Friday, February 10, 2012
CAT Boondoggle
Once more I made a foray into computer-assisted translation, and once more I was appalled. I keep looking, off and on, for better tools. They probably exist, but the right ones have not come along for me yet.
Computer-based and computer-assisted translation are generally caught in a bind, because these tools operate on a set of faulty assumptions.
True enough, Champollion deciphered hieroglyphics by comparing three scripts with one another, working from the notion that the three texts on the Rosetta Stone related the same thing in different languages, which gave him the key to the Egyptian language.
But ever since, linguistic scholarship has been getting increasingly bogged down in mistaking the symbols for the things they represent, generally confusing form and content. This leads to the whole absurdity of computer translation, and even computer-assisted translation falls into the same trap.
With a very Platonic twist, A Course in Miracles says: "Let us not forget, however, that words are but symbols of symbols. They are thus twice removed from reality." (ACIM:M-21.1:9-10) In other words, words give form to a symbol in the mind, behind which is a reality that those symbols merely approximate. The same meaning could be symbolized differently in a different medium, but it should be evident that no direct transformation of one set of symbols into another is possible unless we first step back to the meaning behind those symbols.
My favorite example of criticism along these lines comes from the Dutch author Jan Willem Kaiser, who in his commentary on the Gospel of Mark describes the failings of almost all Bible translations: they relate only the manifest story, without ever grasping the content. What the Bible translators don't get is Jesus' word that it all comes to us in symbols, and the business of biblical translation continually attempts to transpose the symbols without recourse to the latent meaning. Kaiser makes the obvious point that unless the translators are seriously active students of Jesus' teachings, there is little chance of a good outcome.
Unfortunately, most translators labored under Paul's misinterpretation of Jesus, with its fundamentalist, literal distortions, in which he almost consistently confuses content and form, indicated perhaps best by his belabored conclusion that Jesus was talking about the resurrection of the body, not of the mind.
Those of us who study ACIM today often end up rediscovering the deeper meaning behind the symbols and suddenly seeing through the parables of the Bible, which would lead to very different translations. Examples are plentiful. Jesus' Greek term metanoia actually means a change of mind, yet under the influence of Pauline theology it is most often rendered as repentance, which was not what Jesus was talking about: he was teaching, in parables, the possibility of changing our mind. An example such as the Samaritan woman at the well in John 4 becomes much clearer when you connect it with the Course's teaching on special relationships. Indeed, she was not 'married,' and even her current 'husband' was not her 'husband'; the whole episode is an account, in parables, of Jesus' teaching on special relationships, which our mind/soul engages in on the level of the body, and which can never be real relationships because they are premised on separation and reinforce separation. The one and only real relationship is what the Course calls the Holy Relationship: the healing of the illusory separation that keeps our mind seeking outside itself, and the restoration of true communication through our true self. Pity the poor translators who thought Jesus was talking about 'husbands' when he was really talking in parables about relationships, just as he was not speaking of 'breads' in Mark but of spiritual nourishment. Nor did he speak of daily bread in the Lord's Prayer. On and on.
The same is true of translation, and the lines preceding the quote above actually clarify this, so let me quote the full paragraph:
Strictly speaking, words play no part at all in healing. The motivating factor is prayer, or asking. What you ask for you receive. But this refers to the prayer of the heart, not to the words you use in praying. Sometimes the words and the prayer are contradictory; sometimes they agree. It does not matter. God does not understand words, for they were made by separated minds to keep them in the illusion of separation. Words can be helpful, particularly for the beginner, in helping concentration and facilitating the exclusion, or at least the control, of extraneous thoughts. Let us not forget, however, that words are but symbols of symbols. They are thus twice removed from reality. (ACIM:M-21.1)
The only reason why Ken Wapnick, Ph.D., a clinical psychologist and not a linguist, has been effective in overseeing the translation of ACIM into all the languages in which it now appears is that he has focused with the translators on content, not on form. Of course the translators need to master their respective languages, but the effectiveness of their translations hinges on their comprehension of the content. Once that is in place, they can express that understanding in their respective tongues. The accounts of Chiao Lin Cabanne with her Chinese translations of the Course speak volumes. She did the first translation, into classical Chinese, based on her linguistic knowledge, and then started learning the Course and translated it into today's simplified Chinese, only to discover that as she learned the teachings of the Course she had to redo her first translation. So she did, and her new translation in classical Chinese now reflects her own growth with the material. The Course is an extreme example in some ways, but the experience of translating it demonstrates why today's notions of translation are completely delusional: they remain stuck in transforming one set of symbols into another, and exclude the intervening step of comprehension.
This week I used one of today's most popular CAT tools, Wordfast Anywhere, which, interestingly, was originally developed by one Yves Champollion, a descendant of the other Champollion. It is evidently a powerful tool if used right, but the question is whether there are any 'right' uses of these tools. To illustrate: I just learned that in Holland there is a TV quiz show based on quotes from appliance user manuals, in which the contestants are supposed to guess what the appliance is, and they almost never can. That is probably a fair testimony to the state of computer-based and computer-assisted translation.
While people who can't spell stumble with spellcheckers because of homophones, translation memory and CAT tools cause worse problems when misused, and it is harder to use them right than to misuse them. I have related my experience with a translation agency a few years ago. It was a very prosaic project: a quote for the electrical installation in the residence of the American ambassador in The Hague. The first translator had supposedly been a professor of linguistics who taught Dutch at a college in New York, and according to the agency she was highly proficient with CAT. My guess was that the translator was Turkish and had learned Dutch and English from a correspondence course. Almost all the terms of the trade were absurdly wrong; a ground fault circuit interrupter, for instance, became something like an 'earth leak switch' in a literal translation from the Dutch. Those are exactly the things where a TM is supposed to help you, but absent an understanding of the topic, total nonsense results. I ended up redoing the translation entirely, correctly, and in record time, such that the agency asked me how I could do it so fast, and I told them it was because I was not using the CAT tools they had initially demanded I use.
Still, I continue to hold out hope for better tool sets for serious translation, the first requirement of which would be the complete elimination of any attempt at computer translation, and an understanding that this is not how the process works. Any meaningful assistance must come from better access to more and better dictionaries, grammars and so on, to support translators who are actually capable of thinking in both languages. If they also have some mastery of their subject matter, perhaps there is hope. In the meantime I have raised my rates for translation with CAT tools to two to three times my rates without them.
Copyright © 2012 Rogier F. van Vlissingen. All rights reserved.
Labels: ACIM, Bible, CAT, Champollion, content, form, Parables, Plato, translation, Wordfast
Monday, July 06, 2009
Babylon V8
As reported here earlier, I've become enamored with Babylon translation support software, and the new version V8.0 is definitely a worthwhile improvement, if you think of it as a support tool for the translator.
For better or for worse, the company also continues to include machine translation, which is patently ridiculous and pointless.
Here is what I wrote to them:
quote (slightly edited from the original letter to the company).
I really don't understand why your company still keeps that stupid "translate" function in there. It is so mind-numbingly stupid that it ceases to be funny after three tries.
Anyone who understands what language even is should understand that this is a categorical and structural impossibility, something that can never be achieved. EVER. Language processing does not work the way computer scientists seem to imagine, and I say this having read more computer science than the average Ph.D. The fundamental thinking error is to assume that the mind and comprehension proceed from the concrete to the abstract, when clearly the reverse is the case. This same issue is the underlying error behind all the nonsense about artificial intelligence. Because a computer has to synthesize the abstract from the concrete, it is structurally incompetent at higher-level operations, and it can never produce language, no matter how intricate the languages for programming it. The unavoidable, fundamental flaw is having to deduce the abstract from the concrete, a process in which there is no conformal mapping when the component of human experience is missing, and that experience is the only conceivable guarantee that a translation is a translation and not gibberish.
Seen from this viewpoint, words don't make meaning; meaning makes words, or rather finds expression in words, and words evoke that meaning. Translation is (a) understanding, and (b) rendering into another language. The mind has to rise to the abstract level and then descend again into the concreteness of another language, in proper idiom.
There is not even the remotest possibility that a mathematical transform, or any logical transformative process on the phenomenological level of language, could ever produce a meaningful translation, except of something so trivial that you would not need a translation anyway. As I said above, the whole notion is categorically absurd, and persisting in it makes your company look stupid.
It sounded pretty smart when you talked about tools for translators. The translate function does not belong.
unquote
Copyright © 2009 Rogier F. van Vlissingen. All rights reserved.
Saturday, May 16, 2009
Progress with Translation...
Some time ago I wrote here about Babylon Software, which I'm starting to like a lot, particularly because it is a Swiss-army-knife style of tool for translators: extremely flexible. The other thing I like is that this company understands that machine translation is a stupid idea, one that only a total geek could come up with, someone who thinks the mind was modeled on a computer chip instead of the other way around. Once you get that point, it's obvious that a computer is a stupid instrument that can, under certain conditions, execute certain instructions more efficiently than we can, and only a total moron would mistake this for thinking. So expressions like machine translation and artificial intelligence are permanent oxymorons, just like smart bombs or military intelligence.
So, as a company, Babylon is now addressing itself to making tools for the needs of translators. For some reason it did retain machine translation as an option, which I can recommend any time you need comic relief. It will give you the same sort of asinine results as the translation function on Yahoo, another group of nerds who don't get it.
You can configure it any way you wish; I like to have both dictionaries (meanings, synonyms, etc.) in the language of origin and a translation dictionary, which puts the tools you need at your fingertips.
The free dictionaries that come with the system are worth what you paid for them, but the real dictionaries are not too expensive. More important than the good dictionaries available for the system, though, and the flexibility of configuring how they come up, is the ability to customize it. For example, there are one or two authors I'm working on whose vocabulary is so unique that I'm developing my own concordance to their work, as the backbone of a multi-year translation project, using the Babylon Glossary Builder. Considering all of this, the product is indeed remarkable, and for these very specialized projects it is becoming indispensable.
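The idea of a project concordance is simple enough to sketch. What follows is a minimal illustration of the concept, not the Babylon Glossary Builder itself; the sample text and the context-window size are arbitrary assumptions:

# Hypothetical sketch of a project concordance: every occurrence of every
# word in an author's text, each with a small window of surrounding context,
# so a recurring term can be rendered consistently across a long project.
import re
from collections import defaultdict

def build_concordance(text: str, window: int = 5) -> dict[str, list[str]]:
    words = re.findall(r"[\w'-]+", text)
    concordance = defaultdict(list)
    for i, word in enumerate(words):
        context = " ".join(words[max(0, i - window): i + window + 1])
        concordance[word.lower()].append(context)
    return concordance

# Usage: feed it the full text of the author's work, then look up how a
# recurring term is actually used before deciding on a rendering.
sample = "Words are but symbols of symbols, and symbols are twice removed."
for line in build_concordance(sample)["symbols"]:
    print(line)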
Copyright © 2009 Rogier F. van Vlissingen. All rights reserved.
Saturday, February 28, 2009
Translation Software Revisited - Babylon
Some time ago I had a funny experience with a translation bureau, which would not hire me as a translator because I did not use the various translation software tools, which I despise. But then they did hire me to revise a translation by one of their "qualified translators," which was a complete disaster. In fact I laughed so hard while doing it that I suggested to them that the translator in question was probably a Turkish national who had taken some correspondence courses in English and Dutch and now figured they were a translator. Evidently not so: according to the company, said "qualified translator" was very qualified indeed, a language professor of Dutch extraction at an American college, who also used all the software tools the company required its translators to use. In the end I did the job faster without the tools than the original translator had done it with them, and I made far fewer errors, not to mention that the errors could have been costly, since the project was a specification for an electrical installation.
Thankfully for their customers, this particular company went out of business.
Most of the issues related to a problem we all see regularly: when people who can't spell use a spelling checker, they end up accepting homophones in the wrong places, such as "then" for "than," etc.
At long last I've decided to test some of these tools anyway, in particular because I came across one company which seems to get that it should focus on helping translators do their job better, not on replacing them. From its description their program sounded impressive, and they are offering me a chance to use their software as part of an ongoing evaluation.
Babylon.com initiated a Translator Outreach Program emphasizing that no machine translation can substitute for an experienced human translator. It is Babylon's aim to reposition its translation software among professional translators. Despite the slogan "translation @ a click," which is aimed at Babylon's general users and describes the ease of clicking on a word to receive a translation, it is important for Babylon to recall the essence of its software: fast and efficient look-up in many dictionaries, which, as Babylon is convinced, is also of great benefit to professional translators. Babylon Ltd. is, inter alia, a provider of English-Dutch and Dutch-English translation solutions.
Translators that would like to join the Babylon Outreach Program for Translators can do so at: http://www.babylon-blog.com/translator-outreach/ and will receive a free annual license of the Babylon software.
So I'll start on this evaluation with all the skepticism I can muster, and I will report back in this spot after I have some experience. Right now I'm near the end (ca. 7/8ths) of one 300,000-word project and halfway through another 80,000-word project, during all of which I've probably consulted a dictionary 25 times and asked an online translation group for help with a difficult phrase about 5 times. The rest has been subtleties requiring human judgment and careful weighing of highly contextual material, as well as avoiding plain stupid mistakes such as employing English turns of phrase in Dutch, the kind of mistake that software makes more likely, not less.
One thing I've also learned the hard way from this project is that correcting a really bad translation, as I did on one book, is far worse than doing a fresh translation from scratch. This is why machine translations will cost you time, not save you time. However, if this tool can add convenience, great. Given the stats above it won't be about time savings, but the convenience could still be worth it.
Copyright © 2009 Rogier F. van Vlissingen. All rights reserved.
Saturday, May 10, 2008
The Thomas Edison Effect
For anyone who has watched the development of computing, the experience of vision and invention must constantly remind us of Thomas Edison and his winning attitude that every failed design brought the working light bulb closer, which eventually proved correct.
In my own modest way I saw some of my visions realized when I was leading the design of strategic IT systems for a shipping company, and even later I found out that some other designs of mine, prototyped in the late 80's, ended up being implemented nearly thirteen years after my departure from the scene.
Since the end of 2000 I seem to have been practicing bad timing, for I got into the Internet game just when it was ending, working for what then seemed a promising ISP, Verio, which had just been acquired by NTT of Japan to form NTT/Verio. The most I got out of that was a year of Japanese studies, paid for by the company, at the Japan institute in Manhattan; for the rest, I got to watch from the inside the collapse of the dot-com boom, which was happening all around us and to us, with projects going nowhere. In 2001 I got involved in the launch of a managed security service with Riptech, later acquired by Symantec to form Symantec Managed Security. The events of 9/11 squashed the fun: by late summer I had built up an interesting order book, but after 9/11 the market disappeared for about two years. NTT/Verio was going through spasmodic RIFs every three months or so for all the time I spent with them, and by March 2002 my turn had come.
Regardless of all the frustrations, it was a fruitful period for exploring ideas that had been on my mind for a long time. It was during this time that my thinking about serious on-line collaboration began to take form, along with the realization that personal computers were such an obvious security disaster that it should be possible to organize greater security on-line than on a "personal computer," which from a security standpoint is leakier than a sieve. One of the first companies I found really inspiring in that regard was www.safe-mail.net, which is still in operation today. The magic of their infrastructure, which is pretty flexible, is a built-in, fully automated deployment of PKI, guaranteeing the integrity of communications within the domain. The weakness, on the other hand, is that their standard consumer offering grants access based on just a username and password, which is inadequate identification for secure transactions of any type; but the potential exists to integrate secure identification, which today is available in many flavors.
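The core of what such an automated PKI guarantees can be sketched in a few lines. This is only the signing half of the story (a real deployment also automates key generation, distribution, and revocation), and it uses the Python cryptography package purely as an illustration of the idea, not as a description of Safe-Mail's implementation:

# Sketch: per-user keys let a domain verify that a message arrived intact
# and came from the holder of the sender's private key. Tampering with the
# message or the signature makes verification fail.
from cryptography.hazmat.primitives.asymmetric import ed25519

sender_key = ed25519.Ed25519PrivateKey.generate()  # done once per user
message = b"Meeting moved to 15:00, agenda attached."
signature = sender_key.sign(message)

# The receiving side needs only the sender's public key, which is what the
# domain's PKI distributes automatically.
sender_public = sender_key.public_key()
sender_public.verify(signature, message)  # raises InvalidSignature if tampered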
At a later time, I began this blog simply to vent some of my ideas which I had not been able to realize, and that in turn led to a number of exchanges which some day may become fruitful.
Fundamentally I think the whole Web 2.0 phenomenon makes it even more critically important to develop serious solutions for an on-line personal workspace, designed to provide better security than the physical world, and your PC in particular, does. The on-line world cannot offer us serious solutions as long as it exposes us to needless security risks, such as the avalanche of identity theft now going on. For the time being Web 2.0 is mostly driven by the ad-supported business paradigm, which, because of the evident success of Google, seems to have become gospel, to such a degree that even Microsoft in its desperation is now working hard to compete with Google. Competition is good, if nothing else as a gauge of one's own progress, but any time a business becomes obsessed with its competitors it usually spells trouble, for it indicates doubt about its own identity or mission. The effort is all about getting better returns from advertising, and the user experience is only the means to that end, which does not bode well for the user experience in the long term.
What is needed as a vision is an understanding that security and privacy are an asset, not a liability, and that the mission of online services should be to solve a customer's problems without giving them additional liabilities they did not have before. Consumer resistance to on-line payments is substantial as a result: many consumers walk away when they feel their security and privacy are threatened by the all-around negligence of the current on-line culture. The real solutions that will arise in our future, and that will form durable on-line businesses, will need to be worth paying for. The ad-supported model leads companies to chase fads and to permanently sacrifice the customer's security and privacy for ease of use and convenience, not to mention data mining, which remains an invasion of privacy no matter how you slice it. And just because the world seems to be in denial about it does not mean the customer has lost their senses and is not aware of it. There just don't seem to be many alternatives right now, although solutions like Safe-Mail play into this sentiment.
I believe a fundamental analytical insight is that communication is not complete without a financial transaction capability, which remains one of the weak links on-line, getting weaker by the minute with every new theft of credit card numbers. It seems a miracle that there are any left that have not been stolen. The mission is an integrated work environment that makes my on-line life a viable alternative to the practical restrictions of the physical world; as long as it increases my risk with new and unacceptable exposures, it condemns itself to being a faddish and unstable business.
For any naive reader who thinks I'm too pessimistic about the current situation, think again: just now, in the June 2008 issue of PC World, a columnist seriously recommends doing your on-line banking on a cell phone, since for now cell phones have fewer security problems, and in another column the author suggests not entrusting one's medical data to either Google or Microsoft unless and until there are laws to protect us. On the whole, companies are their own worst enemies in taking the user for granted.
Copyright © 2008 Rogier F. van Vlissingen. All rights reserved.