All 8 entries tagged Single Sign On


December 14, 2005

Creating a Java KeyStore (JKS) with an existing key

We are using a lot more SSL in e-lab than we used to because of the new super powerful and secure Single Sign On system. This means we need to access SSL keys and certificates programmatically from Java. If you just want to create a new key and use it in Java, you can create a Java KeyStore with the keytool program. However, if you want to use a key and certificate that you already have, things are a little trickier.

I came up with this little unix shell script which should make life easier:

#!/bin/sh
host=$1
storepass=$2
echo "Creating keystore for ${host}"
certFile=${host}.crt
keyFile=${host}.key
echo "Creating pkcs12 file from $certFile and $keyFile"
openssl pkcs12 -export -in $certFile -inkey $keyFile -out ${host}.pkcs12 \
    -name ${host} -passout pass:$storepass
java -classpath . KeystoreKeyImporter ${host}.pkcs12 $storepass ${host}.keystore $storepass
Basically you run:
importscript.sh myhostname.com mypass

It will look for an existing myhostname.com.key and myhostname.com.crt and turn them into myhostname.com.pkcs12 which is then imported into myhostname.com.keystore with the KeystoreKeyImporter java program.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.Key;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.util.Enumeration;

public class KeystoreKeyImporter {

    public static void main(String[] args) throws Exception {

        if (args.length < 4) {
            System.out.println("Usage: KeystoreKeyImporter <inputpkcs12.file> <inputpkcs12.pass>"
                    + " <outputkeystore.file> <outputkeystore.pass>");
            return;
        }

        String pkcs12Location = args[0];
        String pkcs12Password = args[1];

        String keystoreLocation = args[2];
        String keystorePassword = args[3];

        // The pkcs12 file will have been created with something like:
        // openssl pkcs12 -export -in test.crt -inkey test.key.nopass
        //     -out test.pkcs12 -name test
        KeyStore kspkcs12 = KeyStore.getInstance("PKCS12");

        String alias = null;

        FileInputStream fis = new FileInputStream(pkcs12Location);
        kspkcs12.load(fis, pkcs12Password.toCharArray());
        if (!kspkcs12.aliases().hasMoreElements()) {
            System.out.println("No keys!");
            return;
        }

        System.out.println("Has keys!");
        Enumeration aliases = kspkcs12.aliases();
        while (aliases.hasMoreElements()) {
            alias = (String) aliases.nextElement();
            System.out.println("Alias:" + alias);
            Key key = kspkcs12.getKey(alias, pkcs12Password.toCharArray());
            if (key == null) {
                System.out.println("No key found for alias: " + alias);
                System.exit(0);
            }

            System.out.println("Key:" + key.getFormat());
            Certificate cert = kspkcs12.getCertificate(alias);
            if (cert == null) {
                System.out.println("No certificate found for alias: " + alias);
                System.exit(0);
            }
            System.out.println("Cert:" + cert.getType());
        }

        // Copy the key and its certificate chain into a new JKS keystore
        KeyStore ksjks = KeyStore.getInstance("JKS");
        ksjks.load(null, keystorePassword.toCharArray());
        Certificate[] c = kspkcs12.getCertificateChain(alias);
        Key key = kspkcs12.getKey(alias, pkcs12Password.toCharArray());

        ksjks.setKeyEntry(alias, key, keystorePassword.toCharArray(), c);
        ksjks.store(new FileOutputStream(keystoreLocation), keystorePassword.toCharArray());

        System.out.println("Created " + keystoreLocation);
    }
}

You now have a nice JKS with your key and certificate in it.


December 09, 2005

Serializing java objects to Oracle

We recently had a requirement to use our new Shibboleth-based Single Sign On system with a cluster of JBoss servers running an essentially stateless application.

The way that our new SSO works is through the SAML Post Profile, meaning that an authentication assertion is posted by the user to the Shire service. The Shire service then does an Attribute Request back to SSO, puts the results into an in-memory user cache, and generates a cookie which links to the user in the cache.

The problem is that the request might then go back to another member of the cluster which does not share the cache so it won't know about the user represented by the cookie. The obvious solution is some kind of clustered cache.

We've not needed to use any clustered cache technology before so passed on the likes of Coherence (insane pricing) and other open source caches such as memcached. It is best not to introduce new technologies that you can't support unless you have to.

I ended up building a simple two-level cache that put the data both in memory and in the database. If a request came in and there was nothing in the memory cache, it checked the database and repopulated the memory cache from there. I didn't want to go to the database every time, as this is a very busy application that could do without the additional overhead.
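The lookup logic itself is simple. Here is a minimal, self-contained sketch of the idea (the class name and the Map standing in for the real database table are hypothetical, not the production code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a two-level cache: check memory first, fall back to the
// database, and repopulate the memory cache on a database hit. The
// "database" Map here just stands in for the real objectcache table.
public class TwoLevelCache {

    private final Map<String, Object> memoryCache = new ConcurrentHashMap<String, Object>();
    private final Map<String, Object> database = new HashMap<String, Object>();

    public Object get(String key) {
        Object value = memoryCache.get(key);
        if (value != null) {
            return value; // fast path: found in memory
        }
        value = database.get(key); // slow path: hit the database
        if (value != null) {
            memoryCache.put(key, value); // repopulate memory for next time
        }
        return value;
    }

    public void put(String key, Object value) {
        // Writes go to both levels so any cluster member can find the entry
        memoryCache.put(key, value);
        database.put(key, value);
    }

    public void evictFromMemory(String key) {
        memoryCache.remove(key);
    }
}
```

In the clustered case the database level is shared, so a member whose memory cache misses can still recover the user from the table.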

Now, the code.

ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos;
try {
    oos = new ObjectOutputStream(baos);
    oos.writeObject(value);
} catch (IOException e) {
    throw new RuntimeException("Could not write object to stream", e);
}

SqlUpdate su = new SqlUpdate(getDataSource(),
        "INSERT INTO objectcache (key, objectdata, createddate) VALUES (?, ?, ?)");
su.declareParameter(new SqlParameter("key", Types.VARCHAR));
su.declareParameter(new SqlParameter("objectdata", Types.BLOB));
su.declareParameter(new SqlParameter("createddate", Types.DATE));
su.compile();

Object[] parameterValues = new Object[3];
parameterValues[0] = key.toString();

LobHandler lobHandler = new DefaultLobHandler();
parameterValues[1] = new SqlLobValue(baos.toByteArray(), lobHandler);

parameterValues[2] = new java.sql.Date(new Date().getTime());

su.update(parameterValues);

Not knowing how big these objects were going to be, I figured it would be best to put them in a BLOB, but that has its own joys, especially with plain old JDBC. I used Spring's very handy JDBC helpers to make my life easier. If you want to get the object back out:
ObjectInputStream ois = new ObjectInputStream(new DefaultLobHandler().getBlobAsBinaryStream(resultSet, 1));
UserCacheItem dbItem = (UserCacheItem) ois.readObject();
return dbItem;

Basically, just select the row back and use an ObjectInputStream to deserialize the object back into existence. Simple.
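Stripped of the database, the round trip is just standard Java serialization. A self-contained sketch (UserCacheItem here is a stand-in for the real cached class):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationRoundTrip {

    // Stand-in for the real cached user object; anything Serializable works
    public static class UserCacheItem implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String userId;
        public UserCacheItem(String userId) { this.userId = userId; }
    }

    // Produces the byte array that goes into the BLOB column
    public static byte[] serialize(Object value) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(baos);
        oos.writeObject(value);
        oos.close();
        return baos.toByteArray();
    }

    // Rebuilds the object from the BLOB's binary stream
    public static UserCacheItem deserialize(byte[] data) throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data));
        return (UserCacheItem) ois.readObject();
    }
}
```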

December 08, 2005

Is 99.999% uptime only for Wal–Mart?

Writing about web page http://37signals.com/svn/archives2/dont_scale_99999_uptime_is_for_walmart.php

I've linked to an article on 37 Signals blog that talks about uptime for web applications. They state that you only need to worry about 99.999% uptime once you're doing big business.


Wright correctly states that those final last percent are incredibly expensive. To go from 98% to 99% can cost thousands of dollars. To go from 99% to 99.9% tens of thousands more. Now contrast that with the value. What kind of service are you providing? Does the world end if you’re down for 30 minutes?

If you’re Wal-Mart and your credit card processing pipeline stops for 30 minutes during prime time, yes, the world does end. Someone might very well be fired. The business loses millions of dollars. Wal-Mart gets in the news and loses millions more on the goodwill account.

Now what if Delicious, Feedster, or Technorati goes down for 30 minutes? How big is the inconvenience of not being able to get to your tagged bookmarks or do yet another ego-search with Feedster or Technorati for 30 minutes? Not that high. The world does not come to an end. Nobody gets fired.

Having a quick look at our wonderful IPCheck software, these are our values for the last 3 months.

  • BlogBuilder: 99.70% (5h40m downtime)
  • SiteBuilder: 99.93% (24m downtime)
  • Forums: 98.97% (27h downtime)
  • Single Sign On: 99.89% (1h43m downtime)

Whose fault those 0.30%, 0.07%, 1.03% and 0.11% are doesn't really matter. Sometimes things are just slow rather than down, sometimes things just break, sometimes it's the network, and sometimes it's human error during a redeploy. All our users see is that the service is down for some small period of time. In many cases the system is not actually down; a single request from the monitoring server just failed. But to be fair, if that can happen to the monitor, the chances are it occasionally happens to a user without the monitor noticing either.

This is just a small selection (but of the most commonly used systems we monitor), but you can see that we have good uptime. Would it matter if we were a couple of percentage points lower? As always…it depends.

If Single Sign On were down for an hour on a single Monday morning and that was the only downtime that month, it'd look like a fantastic month of roughly 99.9% uptime (an hour is only about 0.14% of a 30-day month). Unfortunately many systems rely on SSO, so that hour would at least degrade, if not bring down completely, all of those other systems too, adding up to a very nasty bit of downtime.

The 37 Signals article is correct that you do have to spend quite a bit of money to get that extra percentage point, but in the environment we work in where so many people come to rely on our services, it is important.

If, however, you need the occasional planned downtime and you can let everyone know in advance, that is fine, as people can make other plans. So pure uptime is not always what matters; it is keeping the unplanned downtimes to a minimum that counts.


October 12, 2005

LDAP connection pooling

We recently had problems with load on our single sign on (SSO) server. Being the start of term, things are generally busier than the rest of the year and we often see higher load than normal. However, this was too far from normal to be right.

A bit of investigation showed that our JBoss instance had literally hundreds and hundreds of threads. lsof is a very handy utility in cases like this.

lsof -p <procid>

This revealed hundreds of open connections to our LDAP servers. Not good.

Looking at the LDAP code we have, there are two places where we make LDAP connections, or, as they are known in Java, contexts.

Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://ourldap.warwick.ac.uk");
LdapContext ctx = new InitialLdapContext(env, null);
// do something useful with ctx
ctx.close();

This is pretty much how our code worked in both places. Importantly I'd checked that the contexts were always closed…and they were.

This is where LDAP connection pooling came into the picture. It turned out that one piece of code (not written by us), used this:

env.put("com.sun.jndi.ldap.connect.pool", "true");

This turns on connection pooling. However, we didn't use pooling in the other bit of code, so one or the other wasn't being pooled. Trying out pooling in both bits of code didn't improve things either. This is a multi-threaded application with hundreds of requests a minute, and if you just keep creating new LdapContexts from a brand new LdapCtxFactory, you are using a new factory every time.

Thankfully our SSO application uses Spring so it was simple enough to create an XML entry for the LdapCtxFactory and the environment config and plug the same LdapCtxFactory into the two places it was needed. At least now we were using the same factory.

We could now do this:

Map env = new Hashtable();
env.putAll(getLdapEnv());
env.put("java.naming.security.principal", user);
env.put("java.naming.security.credentials", pass);
LdapContext ldapContext = (LdapContext) getLdapContextFactory().getInitialContext((Hashtable) env);

Here the base LDAP environment and the LdapCtxFactory were injected into the places they were needed. Only the username and password to bind as are passed in dynamically.

To really know whether pooling is working, you need to turn on debugging for the LDAP connection pooling by adding a Java option to your test/application/server. There are other handy options for tweaking the pooling behaviour as well:

-Dcom.sun.jndi.ldap.connect.pool.debug=fine
-Dcom.sun.jndi.ldap.connect.pool.initsize=20
-Dcom.sun.jndi.ldap.connect.pool.timeout=10000

The debugging will give you messages like this if pooling isn't working:

Create com.sun.jndi.ldap.LdapClient@c87d32[nds.warwick.ac.uk:389]
Use com.sun.jndi.ldap.LdapClient@c87d32
Create com.sun.jndi.ldap.LdapClient@c81a32[nds.warwick.ac.uk:389]
Use com.sun.jndi.ldap.LdapClient@c81a32
Create com.sun.jndi.ldap.LdapClient@a17d35[nds.warwick.ac.uk:389]
Use com.sun.jndi.ldap.LdapClient@a17d35
Create com.sun.jndi.ldap.LdapClient@1a7e35[nds.warwick.ac.uk:389]
Use com.sun.jndi.ldap.LdapClient@1a7e35

New connections are just being created every time with no reuse. What you should see is:

Use com.sun.jndi.ldap.LdapClient@17bd5d1
Release com.sun.jndi.ldap.LdapClient@17bd5d1
Create com.sun.jndi.ldap.LdapClient@cce3fe[nds.warwick.ac.uk:389]
Use com.sun.jndi.ldap.LdapClient@cce3fe
Release com.sun.jndi.ldap.LdapClient@cce3fe
Use com.sun.jndi.ldap.LdapClient@1922b38
Release com.sun.jndi.ldap.LdapClient@1922b38
Use com.sun.jndi.ldap.LdapClient@17bd5d1
Release com.sun.jndi.ldap.LdapClient@17bd5d1

As you can see, there are actually two differences between a fully working connection pool and a well and truly broken one.

  1. There are very few creates and lots of reuse in the good code
  2. There are lots of releases after connection use in the good code

This is where we came across our second problem. Although in theory the connection pooling was working and I could see some reuse, it was still creating a lot of connections, and I was hardly seeing any 'Release' messages.

Chris hit the nail on the head by pointing out that NamingEnumerations could well be just like PreparedStatements and ResultSets in JDBC. It is all fine and well closing the connection/context itself, but if you don't close the other resources, the underlying connection won't actually be released.
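The fix is the same habit JDBC teaches: close the enumeration in a finally block as well as the context. A minimal sketch of the pattern; the stub enumeration below is purely illustrative, standing in for a real search result so the idea can be shown without a live LDAP server:

```java
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;

public class EnumerationCloser {

    // Drain an enumeration and guarantee close() runs, ResultSet-style.
    // Without the close() in finally, the pooled connection is never released.
    public static int drainAndClose(NamingEnumeration<?> results) throws NamingException {
        int count = 0;
        try {
            while (results.hasMore()) {
                results.next();
                count++;
            }
        } finally {
            results.close();
        }
        return count;
    }

    // Tiny in-memory NamingEnumeration so the pattern can be exercised
    // without a directory server; records whether close() was called.
    public static class StubEnumeration implements NamingEnumeration<String> {
        private final String[] items;
        private int index = 0;
        public boolean closed = false;

        public StubEnumeration(String... items) { this.items = items; }
        public boolean hasMore() { return index < items.length; }
        public String next() { return items[index++]; }
        public boolean hasMoreElements() { return hasMore(); }
        public String nextElement() { return next(); }
        public void close() { closed = true; }
    }
}
```

The same try/finally shape wraps the real ctx.search(...) call, with the context getting its own close() in a finally block too.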

The proof of this shows up again in lsof or netstat. A context that has been closed but still has an open NamingEnumeration shows up like this:

java    21533 jboss   80u  IPv6 0x32376e2cf70   0t70743    TCP ssoserver:60465->ldapserver.warwick.ac.uk:ldap (ESTABLISHED)

However, once everything is properly closed, the connection should be waiting to be torn down, like this:

java    21533 jboss   80u  IPv6 0x32376e2cf70   0t70743    TCP ssoserver:60465->ldapserver.warwick.ac.uk:ldap (TIME_WAIT)

Upon closing all NamingEnumerations, we finally got the perfect result: hundreds of requests a minute and only ever around 10–15 LDAP connections open at any one time.

So, lessons learnt.

  • When creating contexts, share the factory to use pooling
  • Make sure you close everything. If it has a close()...use it!
  • Occasionally take a look at the open connections and threads that your application has…it might surprise you.

Update:

Spring config:


<bean id="ldapContextFactory" class="com.sun.jndi.ldap.LdapCtxFactory" singleton="true"/>

<bean id="ldapEnv" class="java.util.Hashtable">
    <constructor-arg>
        <map>
            <entry key="java.naming.factory.initial"><value>com.sun.jndi.ldap.LdapCtxFactory</value></entry>
            <entry key="java.naming.provider.url"><value>ldaps://ourldap.ac.uk</value></entry>
            <entry key="java.naming.ldap.derefAliases"><value>never</value></entry>
            <entry key="com.sun.jndi.ldap.connect.timeout"><value>5000</value></entry>
            <entry key="java.naming.ldap.version"><value>3</value></entry>
            <entry key="com.sun.jndi.ldap.connect.pool"><value>true</value></entry>
        </map>
    </constructor-arg>
</bean>

Update:
We now do connection pooling with LDAPS so we use the additional system property:

-Dcom.sun.jndi.ldap.connect.pool.protocol="plain ssl"

June 28, 2005

Benefits of Single Sign On

Follow-up to SSO v3: Federated identity from Kieran's blog

Jeremy Smith has been blogging about his work at CASE a great deal over the last 6/7 months. It's great to read about someone in quite a similar situation to us here at Warwick. He's been working on blogs, single sign on, wikis and just generally thinking about the future of web services and development of systems for universities.

The Benefits of Single Sign On is his most recent article where he sums up quite nicely what is so great about SSO.

Speaking of which…we are very close now. With other distractions, I've not been able to dedicate 100% of my time to SSO over the last couple of months, but the end is near.

One big lesson learnt of the last few months…SSL is not your friend. Doing lots and lots of signing and encrypting XML, mutual SSL authentication and the like is a right royal pain. However, I have finally beaten it into submission and we are now looking pretty ready. There is of course lots of testing still to be done, testing out the conversion of systems from the old SSO to the new SSO (they don't have to be all changed at once, the new system is compatible with the old one, but the new SSO is a hell of a lot better in many ways), load testing and just general checking of my sanity.

The slightly depressing thing is actually that the average user will not really notice a huge difference in the way things work once we roll out the new system. The single biggest difference will be that you will get a full screen to login with rather than a little popup window in the future. But apart from that, the biggest changes are in the background.

For a lot of people, though, an advantage will be getting rid of their old Athens login/password. During the next academic year, more or less anyone with a Warwick username/password will be able to login to Athens resources with just that set of credentials, just like they would into any other SSO protected resource at Warwick (SiteBuilder, Forums, Blogs, etc…).


April 11, 2005

SSO v3: Federated identity

Follow-up to The new project: SSO v3 from Kieran's blog

In recent years a lot of people have implemented Single Sign On (SSO) systems that work within their company or institution. However, once you leave your institution and start using the services of some external provider you generally lose your identity and have to register and login again with a different username and password.

However, with federated identity, you can share your identity with trusted partners. The perfect example of this (and the main driver for doing this) is the Athens service. Traditionally users had a different username and password that they received when they registered with IT Services which they could then use with Athens. Unfortunately not everyone knows this and those that do can easily enough forget/lose that information.

In the not too distant future we will start federating identities with Athens. This will mean that users login once at Warwick with their usual username and password and will automatically be logged in over at Athens (via our shiny new Shibboleth/SAML based SSO system). This clearly has some big advantages.

  • One username and password to remember (fewer helpdesk calls to retrieve them!)
  • We control the authentication and the release of information about our users to Athens.
  • Athens doesn't have as much user administration to do

Importantly, because this is an open standard that is being adopted by lots of people, it won't just be Athens that we can federate with.

Another example is in purchasing. We may well want it so that any staff member can order a hire car from a rental agency we have a partnership with. Instead of having to register everyone and maintain those registrations and all the user info, we set up a federated relationship whereby users login locally and we just tell the rental agency to let this user in because we vouch for who they are.

Once you get into the territory of having our users federated with lots of partners, two more advantages crop up.

  • In a single user creation action, allow a new user to login to many external systems without having to register them on each one
  • When a user leaves Warwick, we just disable their local login and immediately they can't login to any partner sites (no more worrying about tracking down and disabling all external accounts)

Whilst setting ourselves up to federate our users out to other partners, we are also making it possible for our own services to accept users from other institutions who may want to federate theirs. In theory each of our services could accept users from Birmingham University, for instance, allowing for very simple cross-institutional projects. Examples:

  • Setup a SiteBuilder site that allows external users to edit/read it
  • Create a learning resource that allows users from any federated university to login and use it
  • Allow students from other universities to login and comment on blogs

You can of course do this now, but you'd have to register users locally and manage them; once you federate identity, this isn't your problem. As long as you have a trusted relationship with your partners, you can just leave the user management up to them.


April 06, 2005

SSO v3: Scalability and manageability

Follow-up to The new project: SSO v3 from Kieran's blog

One of the primary reasons for wanting to create a new Single Sign On (SSO) system is so that we can work better with the much wider range of users and services that want to use SSO.

Some stats:

  • Servers and developer boxes registered to work with SSO: 64
  • Logins per day: 9,500
  • User login checks from services per day: 20,000 (implying each user uses 2 different services per day on average)
  • Warwick users: 20,000

So, around half of all users login each day to some SSO controlled service. Obviously for a lot of services you can use them without logging in, but you must login to do certain tasks. A lot of users reading pages in SiteBuilder, Blogs and Forums are not logged in (but will login to edit/comment/post).

To manage these numbers more effectively, we needed a more powerful and better compartmentalised system.

Our starting point was the Shibboleth Origin software from Athens. The team at Athens provide a Java based Shibboleth Origin that can be easily integrated into our systems (also Java). From this starting point, we've got up to speed relatively easily with the Shibboleth protocol.

One of the nicest features of the software is the security, which uses mutual SSL authentication more or less everywhere and signed SAML (XML) as the way of getting data around securely.

My other favourite feature is the ability to define which attributes of a user get sent to different client services. So if we only wanted to tell Athens your first name but not your email or full name, we could do so with a simple bit of configuration. But for our internal systems, we could easily configure it to send much more detailed and trusted information. This provides a more secure, private and virtually anonymous way of authenticating to partner services: as long as the partner service trusts Warwick to vouch for someone's identity, they will let that user in without knowing their real identity.


The new project: SSO v3

At Warwick we have had a web Single Sign On (SSO) system for a few years now. Not knowing our long term plans, we developed a simple but performant system that at the time served just the couple of web applications we had (SiteBuilder, our CMS, and our forums system).

Times have changed.

There are now the best part of 20 systems using our existing SSO system, everything from blogs to timetabling, printer credits to accommodation bookings. It works well and is easy to integrate with, but we are ready to move to the next level.

Why is it time to move on?

  1. Scalability and management of more client services and users
  2. Ability to federate identity to and from external partners
  3. Improve security

I've been working on this now for the last couple of months and it is almost ready. It should not affect most people except that you'll be presented with a different login screen which will most likely be a full screen rather than a popup.

