
March 26, 2007

Java UTF-8 international character support with Tomcat and Oracle


I've spent the last few days getting proper international character support working in our Files.Warwick application.

At E-Lab we've never been that great at internationalisation support. BlogBuilder does a pretty good job of it, as can be seen by quite a lot of our bloggers writing in Chinese/Korean/Japanese.

However, it's a bit of a kludge and doesn't work everywhere.

It didn't take long for someone to upload a file to Files.Warwick with an "é" in the file name. Due to our previous lack of thought in this area, this swiftly turned into a "?". :( So how do you get your app to support international characters throughout?

What is international character support?

You'll hear all sorts of jargon regarding internationalisation support. Here is a little explanation of what it is all about.

What I do NOT mean is i18n support, which is making the application support multiple languages in the interface so that you can read help pages and admin links in French or Chinese. What I mean by international character support is being able to accept user input in any language or character set.

Tim Bray has a really good explanation of some of the issues surrounding ASCII/Unicode/UTF-8.

UTF-8 all the way through the stack

We need to look at UTF-8 support in the following areas:

  1. URLs
  2. Apache
  3. HTML
  4. Javascript
  5. POST data
  6. File download (Content-Disposition)
  7. JSPs
  8. Java code
  9. Tomcat
  10. Oracle
  11. File system

I'll go through each of these areas and explain how well it is supported by default and what changes you might need to make to support UTF-8.


URLs

URLs should only contain ASCII characters. The ASCII character set is quite restrictive if you want to use Chinese characters, for instance, so some encoding is needed here. So if you've got a file with a Chinese character in its name and you want to link to it, you need to do this:

"中.doc" ->  "%E4%B8%AD.doc"

Thankfully this can be done with a bit of Java:

URLEncoder.encode("中.doc", "UTF-8");

So, whenever you generate something for the address bar, a redirect, or anything like that, you must URL encode the data. You don't have to detect when this is needed, as it doesn't hurt to encode links which are just plain old ASCII; they don't get changed, as you can see with the ".doc" ending in the above example.
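A minimal, self-contained sketch of the round trip (the file names here are just examples):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;

public class UrlEncodeDemo {
    public static void main(String[] args) throws Exception {
        // Encode a non-ASCII file name for use in a URL.
        String encoded = URLEncoder.encode("中.doc", "UTF-8");
        System.out.println(encoded); // %E4%B8%AD.doc

        // Plain ASCII passes through untouched.
        System.out.println(URLEncoder.encode("report.doc", "UTF-8")); // report.doc

        // Decoding with the same charset gets the original back.
        System.out.println(URLDecoder.decode(encoded, "UTF-8")); // 中.doc
    }
}
```

One caveat: URLEncoder is really meant for form data, so it turns spaces into "+", which is fine for query strings but not always what you want in a path.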


Apache

Generally you don't need to worry about Apache, as it shouldn't be messing with your HTML or URLs. However, if you are doing some proxying with mod_proxy then you might need to have a think about this. We use mod_proxy to proxy from Apache through to Tomcat. If you've got encoded characters in a URL that you need to convert into a query string for your underlying app, then you're going to hit a strange little problem.

If you have a URL coming into Apache that looks like this:

http://mydomain/%E4%B8%AD.doc and you have a mod_rewrite/proxy rule like this:

RewriteRule ^/(.*) http://mydomain:8080/filedownload/?filename=$1 [QSA,L,P]

Unfortunately the $1 is going to get mangled during the rewrite. QSA (Query String Append) actually deals with these characters just fine and will pass them through untouched, but when you capture part of the URL, such as my $1 here, the characters get mangled: Apache does some unescaping of its own as ISO-8859-1, but the data is UTF-8, not ISO-8859-1, so it doesn't work properly. So, to keep our special characters in UTF-8, we escape them back again:

RewriteMap escape int:escape
RewriteRule ^/(.*) http://mydomain:8080/filedownload/?filename=${escape:$1} [QSA,L,P]

Take a look at your rewrite logs to see if this is working.


HTML

HTML support for UTF-8 is good; you just need to make sure you set the character encoding properly on your pages. This should be as simple as a bit of code in the HEAD of your page:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8"> 

You should be able to write out UTF-8 characters for real into the page without any special encoding. 


Javascript

Javascript supports UTF-8 characters very well, so as long as you don't use escape() (use encodeURIComponent() instead), your users' input shouldn't get mangled. We also use AJAX to do some functions in our application, so you need to think about that as well, but again, it should just work.

All of the above only holds true if you set the character encoding right on your surrounding HTML.

POST data

Getting POST data from the user in the right format is simple too. As long as your HTML has the right encoding, you should be OK.

File download (Content-Disposition) 

If you want to serve files for download from your app, as we obviously do with Files.Warwick, then you'll need to understand how browsers deal with non-ASCII characters in file names when downloading. Unfortunately the standard is not exactly well defined, as no one really thought about UTF-8 file names until recently.

Internet Explorer supports URL encoded file names but Firefox supports a rather strange Base64 encoded value for high byte file names, so something like this should do the job:

String userAgent = request.getHeader("User-Agent");
String encodedFileName = null;

if (userAgent.contains("MSIE") || userAgent.contains("Opera")) {
    encodedFileName = URLEncoder.encode(node.getName(), "UTF-8");
} else {
    // Base64 here is org.apache.commons.codec.binary.Base64
    encodedFileName = "=?UTF-8?B?"
        + new String(Base64.encodeBase64(node.getName().getBytes("UTF-8")), "UTF-8")
        + "?=";
}

response.setHeader("Content-Disposition", "attachment; filename=\"" + encodedFileName + "\"");

Obviously you can tweak the user agent detection to be a bit smarter than this. 
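As an aside, the Base64 class above is from Apache Commons Codec; on Java 8 or later the built-in java.util.Base64 can build the same encoded-word. A minimal sketch (the file name is just an example):

```java
import java.util.Base64;

public class EncodedWordDemo {
    // Build the RFC 2047 style encoded-word used in the Firefox
    // branch above: =?UTF-8?B?<base64 of the UTF-8 bytes>?=
    static String encodedWord(String fileName) throws Exception {
        String b64 = Base64.getEncoder().encodeToString(fileName.getBytes("UTF-8"));
        return "=?UTF-8?B?" + b64 + "?=";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(encodedWord("中.doc")); // =?UTF-8?B?5LitLmRvYw==?=
    }
}
```

For what it's worth, the standard way to do this nowadays is the filename* parameter from RFC 5987/6266, but that postdates this post.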


JSPs

UTF-8 support in JSPs is pretty much a one-liner.

<%@ page language="java" pageEncoding="utf-8" contentType="text/html;charset=utf-8" %>

Include that at the top of every single JSP, perhaps in a prelude.jsp file, and you're away.

Java code

As long as your source strings are properly encoded, you can generally rely on Java to keep your UTF-8 encoded input intact. However, be careful what String functions you perform on your UTF-8 data. Be sure to do things like this:

myStr.getBytes("UTF-8") rather than just myStr.getBytes()

If you don't, then you'll most likely end up with ISO-8859-1 bytes (or whatever your platform default encoding is) instead. If your UTF-8 input has ended up wrongly decoded as ISO-8859-1, you can recover the original characters like this:

String myUTF8 = new String(my8859.getBytes("ISO-8859-1"),"UTF-8")
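To make the byte-level behaviour concrete, here is a small sketch using the Chinese character 中 (U+4E2D) as an example:

```java
public class EncodingDemo {
    public static void main(String[] args) throws Exception {
        String s = "中"; // U+4E2D

        // UTF-8 encodes this character as three bytes: E4 B8 AD.
        byte[] utf8 = s.getBytes("UTF-8");
        System.out.println(utf8.length); // 3

        // Round trip: decoding with the same charset gets the String back.
        System.out.println(new String(utf8, "UTF-8").equals(s)); // true

        // The classic mojibake repair: UTF-8 bytes wrongly decoded as
        // ISO-8859-1 can be re-encoded and decoded correctly.
        String mangled = new String(utf8, "ISO-8859-1");
        String repaired = new String(mangled.getBytes("ISO-8859-1"), "UTF-8");
        System.out.println(repaired.equals(s)); // true
    }
}
```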

Debugging can be fun with high byte characters as generally logging to a console isn't going to show you the characters you are expecting. If you did this:

System.out.println(new String(new byte[] { -28, -72, -83 }, "UTF-8"));

Then you'd probably just see a ? rather than the Chinese character that it really should be. However, you can make log4j log UTF-8 messages. Just add 

<param name="Encoding" value="UTF-8"/>

To the appender in your log4j.xml config, or the equivalent Encoding setting in your log4j.properties file. You will still only see the UTF-8 data properly if you view the log file in an editor/viewer that can handle UTF-8 (Windows Notepad is OK, for instance).


Tomcat

By default Tomcat will decode everything as ISO-8859-1. You can in theory override this by setting the incoming encoding of the HttpServletRequest to UTF-8, but once any part of the request has been read, the encoding is fixed, so chances are you won't be able to call

request.setCharacterEncoding("UTF-8");

early enough to have an effect. So instead you can tell Tomcat you want it to run in UTF-8 mode by default. Just add the following attribute to the Connector you want UTF-8 on in your server.xml config file in Tomcat:

URIEncoding="UTF-8"

Not doing this has the fun quirk that if you have a request like this:

http://mydomain/page?highByte=%E4%B8%AD

then request.getQueryString() gives you the raw string "highByte=%E4%B8%AD", but request.getParameter("highByte") gives you the value decoded as ISO-8859-1 instead, which is not right. Sigh.
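The decoding difference is easy to reproduce outside Tomcat with URLDecoder; this sketch uses the same %E4%B8%AD sequence:

```java
import java.net.URLDecoder;

public class QueryDecodeDemo {
    public static void main(String[] args) throws Exception {
        String raw = "%E4%B8%AD";

        // With UTF-8 decoding you get the single Chinese character back.
        System.out.println(URLDecoder.decode(raw, "UTF-8")); // 中

        // With ISO-8859-1 decoding you get three Latin-1 characters
        // of mojibake instead of one character.
        String mangled = URLDecoder.decode(raw, "ISO-8859-1");
        System.out.println(mangled.length()); // 3
    }
}
```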


Oracle

You could just URL encode all of your data and put it into the database in ASCII like you always used to. However, that doesn't make for very readable data. There are two options here, although I've only tried one.

  1. Set the default character encoding of your Oracle database to be UTF-8. However, it is set on a per server basis, not a per schema basis so your whole server would be affected.
  2. Use NVARCHAR2 fields instead of VARCHAR2 fields and you can store real UTF-8 data.

We went for option 2 as we have a shared Oracle server. First of all, convert all fields that you want to store UTF-8 data in from VARCHAR2s to NVARCHAR2s. Be careful as I don't think you can change back!

You then need to tell your JDBC code somehow that it needs to send data in a form the NVARCHAR2 fields can understand. There are a couple of ways of doing this too:

  1. Set the defaultNChar property on the connection to true.
  2. Use the setFormOfUse() method, an Oracle-specific extension to the PreparedStatement.

I went for option 1 as the problem with option 2 is that you have to somehow get at the Oracle specific connection or prepared statement within your Java code. This is not fun as you'll often be using a connection pool that will hide away these details.
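A minimal sketch of option 1, assuming the Oracle thin JDBC driver (the user, password, and commented-out URL are placeholders):

```java
import java.util.Properties;

public class NCharConnectionDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("user", "myuser");         // placeholder
        props.setProperty("password", "mypassword"); // placeholder

        // Tell the Oracle driver to send Java Strings as national
        // character set (NCHAR/NVARCHAR2) data by default.
        props.setProperty("defaultNChar", "true");

        // In real code you would now do something like:
        // Connection conn = DriverManager.getConnection(
        //     "jdbc:oracle:thin:@dbhost:1521:SID", props);
        System.out.println(props.getProperty("defaultNChar")); // true
    }
}
```

You can also set this JVM-wide with -Doracle.jdbc.defaultNChar=true, which is handy when a connection pool owns the Connection objects and you can't easily pass properties in.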

File system

File system support for UTF-8 characters is again pretty good, but you are sometimes going to have issues viewing file listings. I just couldn't get a UTF-8 file name to display properly over a PuTTY SSH connection. With a simple Java test program I could write and read back a UTF-8 file name on our Solaris 10 box, but all I could ever see when doing an "ls" was ?????.doc. So for the sake of maintainability of the file system I went for a URL encoded version of the file name. This isn't ideal, but it works.


As you can see, there is quite a lot of work involved in supporting UTF-8 throughout. A lot of my time was spent researching as my understanding of encoding issues wasn't great. Now that I've put together this guide, I hope all of our apps can start to work towards full UTF-8 support.

Of course the above guide is quite specific to my experience in the app I was dealing with and the environment I work in so your experiences might be more or less painful :) 

February 24, 2006

Character encoding, Unicode and UTF-8


When you're dealing with reading data from various sources and then end up doing some processing on it and display it on the web, most of the time you don't worry about character encoding. However, occasionally it comes along and bites you.

I always used to know that there were different character encodings and you could end up not displaying international characters properly if you used the wrong type and so on, but I didn't really know about it in depth. This is where good old Joel comes in. He wrote an article a while back entitled:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets
. He does a pretty good job of explaining things.

My specific problem was that an international student's name was coming out of our directory (NDS) like this: H??hner. It turns out they are actually called Hühner, so that one character was being turned into ??. No good. Usually I just say "oh, some character encoding problem" and give up, but this time I was determined to get to the bottom of it. Upon closer inspection, the ?? were an artifact of appearing on the web (a different encoding again); in my Java code, their name was: H├╝hner. Nice.

Doing an ethereal trace of the traffic to my machine when I queried NDS for this person, I saw that:
48 e2 94 9c e2 95 9d 68 6e 65 72
seemed to represent our user's name. This is hex, and having a look at some character encoding charts, it turns out that this is UTF-8. Is there an easy way of fiddling about with different encodings in Java? Not that I can find. So, following the instructions on UTF-8 encoding from here, I worked out that in Unicode that UTF-8 sequence is:
0x48, 0x251C, 0x255D, 0x68, 0x6e, 0x65, 0x72
which does indeed turn into H├╝hner. So nothing was wrong in my code, and it proved that NDS was storing something obscure. Pleasingly, a quick email to our friendly systems team with this evidence got it fixed, and they are now going through the directory fixing bad entries and working out where this strange encoding came from. Hopefully our international students will soon no longer see their names scrambled :)
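For the curious, the decoding step can be reproduced in a few lines of Java; the byte values below are the ones from the trace:

```java
public class NdsTraceDemo {
    public static void main(String[] args) throws Exception {
        // The bytes captured in the ethereal trace.
        byte[] traced = { 0x48, (byte) 0xe2, (byte) 0x94, (byte) 0x9c,
                          (byte) 0xe2, (byte) 0x95, (byte) 0x9d,
                          0x68, 0x6e, 0x65, 0x72 };

        // Decoded as UTF-8, they come out as the mangled name...
        String name = new String(traced, "UTF-8");
        System.out.println(name); // H├╝hner

        // ...whose middle characters are the box-drawing pair
        // U+251C and U+255D, not the expected ü.
        System.out.println(Integer.toHexString(name.charAt(1))); // 251c
        System.out.println(Integer.toHexString(name.charAt(2))); // 255d
    }
}
```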

Geek talk over.
