Gerry Gebel joins Axiomatics

19.02.2010 by Felix Gaehtgens

My friend Gerry Gebel, long-time Burton Group analyst, is joining Axiomatics to ramp up the company’s US presence. I received an email from him that started by saying “I thought I would give you a nice surprise on a Saturday morning”… and indeed, what a surprise it was!

I can definitely understand Gerry’s choice of Axiomatics. The company is new, up-and-coming, full of very smart people and way ahead of everyone else in the area of authorisation/access management. Axiomatics comes in at the top of my own personal “favourite innovative companies” list, together with UnboundID, the latter continuing to amaze me with their determination (and skill!) to redefine directory services from the ground up and “do it properly”. Both Axiomatics and UnboundID will in the near future surely conquer the Identity Management world as we know it! OK, joke aside ;-)

Speaking of Axiomatics, the timing (for me, personally) was actually quite interesting, as I have just finished a report on the company’s “Policy Server” and “Policy Auditor”. It is due to appear on our site within the next week. The report focuses on the strengths and weaknesses of the products, the contexts in which they are most useful, the areas in which they are way ahead of their competitors and those where they still need to catch up.

I’ve also had the pleasure of doing a few Webinars (here and here) with Axiomatics and also interviewed Babak at last year’s EIC. So congratulations both to Gerry and to Axiomatics, a great team has gotten another great addition!



New Webinar series on Claims

06.01.2010 by Felix Gaehtgens

It’s been a few years since Kim Cameron presented the Identity Metasystem around the concept of “Claims”. If you’ve been following KuppingerCole, you know how positive we have been about this framework. Years later, Claims are a reality, and there are multiple platforms out there that support using them. We have been advocating the adoption of the Identity Metasystem’s concepts, and whilst not endorsing any particular platform per se, we acknowledge that several products support this today. From our customers we often hear questions about feasibility, about the approach and of course about best practices for implementation. Naturally, there are questions around the software development cycle as well: do applications need to be fundamentally rewritten, or written differently, to make good use of the Identity Metasystem? What should developers keep in mind to make their lives easier? How can applications be written to strengthen privacy and security?

I’m kicking off the new year with a brand-new webinar series in which we will focus on practical issues and implementation details. The Identity Metasystem is here today, and it’s here to stay, so let’s take advantage of it and unlock its potential. Without endorsing any product per se, we’ll be looking at practical implementations – and indeed, products – to see how developers can harness the power of the Identity Metasystem today. Together with implementation tips, these webinars will feature good practices, and our guests are real experts in their particular implementations.

The format of this series is different from our regular webinars – they are not meant for decision makers, but for developers, architects and administrators, and are therefore technical in nature. If you’re interested in the topic and don’t mind seeing some tidbits of code thrown in, then this is definitely for you. We’re extending an open invitation to open source projects and vendors – not to showcase their products, but instead to show how developers can use their APIs and services. Of course I have a side agenda here as well ;-) What I am hoping is that in the end this will promote interoperability – we’re sure that there are some similarities between the various APIs and services, and as users learn more about them, they’ll put vendors under pressure to standardise their APIs and services :-)

Our guests in the first webinar will be Dr. Steffo Weber and Abdi Mohammadi from Sun Microsystems. On Thursday, the 14th of January, at 17:00 MET (16:00 GMT, 11:00 EST, 8:00 PST), they will show us how to harness Sun’s OpenSSO authentication and authorization mechanisms programmatically from any application (web applications, fat clients etc.) via the following mechanisms:

– HTTP headers
– REST based web-service
– SOAP based web-service
– OpenSSO’s proprietary SDK

Steffo will demonstrate how to retrieve arbitrary user attributes from within a programme that remains almost agnostic to the technical details of the underlying access management infrastructure (in this case, OpenSSO). Using OpenSSO’s identity services thus does not require much knowledge about OpenSSO – in fact, it is easier to retrieve information this way than, for example, from LDAP. Moreover, it can be done from any framework (Java, .NET, PHP, Ruby on Rails – you name it).
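To give a flavour of the REST mechanism, here is a minimal sketch in Java. The endpoint paths and parameter names follow OpenSSO’s documented REST identity services as far as I recall them, and the host name and credentials are placeholders – treat this as an illustration to verify against your own deployment, not as production code:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLEncoder;

    // Minimal sketch: authenticate against OpenSSO's REST identity service
    // and fetch the authenticated user's attributes.
    public class OpenSsoRestDemo {

        static String get(String url) throws Exception {
            StringBuilder sb = new StringBuilder();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(url).openStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                sb.append(line).append('\n');
            }
            in.close();
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            // Placeholder host; substitute your own OpenSSO deployment.
            String base = "http://opensso.example.com:8080/opensso/identity";

            // 1. Authenticate; the response body is "token.id=<sso token>".
            String auth = get(base + "/authenticate?username=demo&password="
                    + URLEncoder.encode("changeit", "UTF-8"));
            String token = auth.trim().substring("token.id=".length());

            // 2. Retrieve the user's attributes using the SSO token; the
            //    response is a flat list of userdetails.attribute.* lines.
            System.out.println(get(base + "/attributes?subjectid="
                    + URLEncoder.encode(token, "UTF-8")));
        }
    }

The same two HTTP calls can be made from any language or framework that can issue web requests – which is exactly what makes this mechanism so agnostic to the underlying platform.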

Steffo studied computing sciences in Bonn and Dortmund, Germany, and holds a Ph.D. in theoretical computer science. He started as a security specialist at debis IT Security Services in 1997 and joined Sun Microsystems in 2000. He is an expert on highly scalable web environments, IT security and cryptography, as well as identity and access management. Apart from being very knowledgeable in the field, he is also an excellent speaker and presented at our European Identity Conference last year together with his colleague Abdi Mohammadi.

Abdi is a Principal Field Technologist at Sun. With more than 20 years of industry experience, he has been responsible for the architecture, design, end-to-end testing and optimization of Internet-facing infrastructures, as well as providing business strategy assistance to some of Sun’s largest and most strategic customers. Currently he is focused on directory, access management and messaging solutions at Sun.


Q & A from the XACML/ABAC Webinar

27.10.2009 by Felix Gaehtgens

In the webinar that Babak and I did on ABAC and XACML three weeks ago, quite a few questions popped up! Unfortunately we did not have time to answer all of them live, so we promised that we would collect them and answer them afterwards.

BTW today there is another webinar on a related topic: The Critical Role of XACML in SOA Governance and Perimeter Web Service Security

Q: Please specify the major difference between role mining (role consolidation based on role attributes) and the privilege-giving attribute mining approach.

A: (Babak) Role mining is about finding groups of permissions that can be bundled into roles, which can then be assigned to users. The idea of privilege-giving attribute mining is to find those attributes that affect permissions and use them to create access rules. Let’s take an example. In a business application, users may have been assigned permissions to Create and Release Purchase Orders, Maintain Vendor Master Data, Release Requisitions, Register Service Entry and Release, etc. A role mining project doing a bottom-up survey of permissions will analyse these permissions and how they are grouped into roles. If a role called Purchasing combines all of the above permissions, we would probably identify a Segregation of Duties violation between the right to Release Purchase Orders and the right to Maintain Vendor Master Data, and as a result we would suggest remodelling the Purchasing role to avoid the conflict. In a top-down approach, role mining is about identifying a role in business-critical processes that needs to be entitled with certain permissions in order to serve its purpose in that process. Role mining projects typically combine top-down and bottom-up, which in the end leads to considerable effort to map permissions to roles in such a way that everyone is able to do his or her job without acquiring excessive permissions – quite a daunting task.

An attribute mining project would, much like the top-down approach in role mining, start with business processes to define which RULES for access can be derived. Example: attestation of purchase orders exceeding an amount of $xx can only be made by users who a) belong to the cost center charged and b) have a management level of 10 or higher. From this rule we can derive that the following attributes are privilege-giving: a) the user profile’s cost center assignment, b) the user’s management level, c) the purchase order’s cost center and d) the purchase order’s amount. To verify: these attributes allow the rule to be formalised like this: if user.costcenter = purchaseorder.costcenter and user.managementlevel >= 10 and purchaseorder.amount <= $xx then permit, else deny.
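To make this concrete, here is the formalised rule as a small sketch in plain Java. The class and field names are purely illustrative (with $xx arbitrarily set), and in a real ABAC deployment this logic would live in a XACML policy evaluated by a policy decision point rather than in application code:

    // Illustrative sketch of the attestation rule above; names invented.
    class User {
        String costCenter;
        int managementLevel;
    }

    class PurchaseOrder {
        String costCenter;
        double amount;
    }

    class AttestationRule {
        static final double LIMIT = 5000.0; // the "$xx" threshold, chosen arbitrarily

        static boolean permitAttestation(User user, PurchaseOrder po) {
            return user.costCenter.equals(po.costCenter)   // a) same cost center
                    && user.managementLevel >= 10          // b) management level 10+
                    && po.amount <= LIMIT;                 // c), d) amount within limit
        }
    }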

Q: Tell me more / define better what you mean when you talk about the missing context in the RBAC model.

A: (Babak) What we argue is that RBAC is a static model, which makes it difficult to capture the context that may affect an access decision. If we try to capture the context for an access in terms of roles, then we easily get a role explosion. We may for instance need to differentiate permissions depending on the time of day – some users have access only during normal business hours, whereas others have 24/7 access. This could lead to the creation of two roles, one for normal business hours and one for extended access. Add other context-related conditions such as remote login, authentication strength, line encryption etc., and we end up needing to capture a great many different roles. It is worth noting that normal ERP systems typically need to handle very large numbers of roles (thousands) internally to capture all their user permissions. If a combined role structure from multiple ERP systems must be established with contextual aspects included, role explosion simply becomes unmanageable.

Q: I didn’t quite get the difference between attribute-based access control and rule-based access control. Can you elaborate?

A: (Felix) In a nutshell, the main difference between ABAC and RBAC is that RBAC revolves around the concept of the role, whereas ABAC can use any attributes (including the role), making it much more flexible.

A: (Babak) Attribute-based access control is not in conflict with rule-based access control. Rule-based access control is about creating rules that define access permissions; if those rules are based on attributes, then we have a type of attribute-based access control.

Q: I understand there is a better way than the RBAC model, but a language alone is not enough – you need a product that combines both. Is this the message you want to send out here?

A: (Babak) Well, the purpose of the workshop is to present the concept of ABAC and how it solves some of the common and well-known issues with RBAC. But you are right that we also need products to support this new approach. Axiomatics has a complete product suite supporting the full XACML policy life cycle, and most vendors of business applications and IAM products also have more or less elaborate support for XACML built in.

Q: Is there a defined migration path from an established RBAC model to an ABAC model?

A: The OASIS XACML committee has released an XACML Profile for Role Based Access Control (RBAC) which can be used as a basis for migration projects. That said, one naturally needs to be aware of the constraints given by the architecture of legacy systems – “converting” an existing RBAC-based business application to an ABAC-based model may require a considerable effort. In some instances it may be more attractive to implement connectors that can provision attribute-based rules from a Policy Administration Point to application specific rule configurations which in turn may be RBAC based.

Q: How do you manage attribute-based access to multiple resources? Traditionally, privilege attributes are bundled into roles and assigned to users. If you have many attributes that control access to resources, doesn’t that increase the administrative burden?

A: No – as we said in the presentation, far fewer attributes will most likely be needed to define access permissions than roles. This is because we define access rules based on the attributes rather than representing each different set of permissions as a role.

Q: Sounds like this method will have significant application impact – can you respond to this concern?

A: Yes, we believe that many applications will in the future implement the XACML request-response protocol. Already today, most large vendors of Identity & Access Management products or applications that handle business critical data have some sort of “XACML story”.

Q: Is ABAC related to claims-based authentication? Are they corresponding concepts?

A: (Babak) Yes, one way to see claims is as provisioning of attributes to the access control system, so these are definitely complementary technologies.

A: (Felix) Authentication and authorisation are two different concepts, but of course they are related: authentication tells us who the user is, and authorisation then tells us whether the user is allowed to do something. Claims-based authentication is based on the fact that a “claim” already delivers attributes to an application. What happens then? These attributes can be made available to the authorisation engine.

Q: Are there any good resources and real-world examples to get started with ABAC?

A: (Babak) Well, a good place to start is the XACML TC page. Axiomatics also has a very informative website (www.axiomatics.com) with introductory information on ABAC and XACML.

A: (Felix) We have also recently released an XACML technology report, which is available from our website.

Q: RBAC seems quite static in maintenance after implementation, whereas ABAC seems maintenance-intensive, since attribute values vary over time (daily, hourly etc.). Would it not make maintenance more expensive and more complex?

A: (Babak) Well, it is really the other way around. The idea is not to change the time attribute manually, but to fetch the data from the right attribute source – which, in the case of time, is simply a clock.

A: (Felix) To add to Babak’s point there: ABAC makes use of information that already exists in an enterprise. The initial maintenance cost lies in delivering those attributes to the policy decision engine – and for that, good technology such as virtual directories already exists.


Google makes changes to Android Market, but many are still unhappy

28.09.2009 by Felix Gaehtgens

Under immense pressure from users and developers, Google has recently announced some changes to Android Market. But this may turn out not to be enough. Even though sales of mobile phones with Google’s Android operating system are ramping up, developers find it hard to make money on the platform. A recent bombshell was a blog post from Larva Labs towards the end of August: Larva Labs’ average income across all its paid Android applications was only $62.39 per day – and that included games ranked #5 and #12 in the Android Market. This is a tiny figure compared to Apple’s App Store, where a #5 position earns around $3500 a day according to sales figures from app vendors.

If developers cannot make a profit from their Android offerings, they will turn away from the platform. As of today, the Google Android Market forums are full of gripes from Android developers trying to sell their software. A common complaint concerns the way applications are displayed in the Android Market: until recently, developers could not post screenshots and were limited to a 325-character description of their program. Google has announced that these limitations are lifted in version 1.6 of the Android platform, which has been released recently.

Another frequent complaint is that Android users in many countries cannot buy applications at all; users in other countries cannot even access free applications through the Android Market. Nor can developers in many countries sell their applications – instead, they are forced to hold them back or offer them for free. The only “supported” countries for paid applications are Austria, France, Germany, the Netherlands, Spain, the US, the UK and (since very recently) Italy. Users from those countries can buy applications, and developers from those countries (plus Japan) can sell applications.

That leaves many users and developers standing in the rain. Google is aware of the problem and states that it is “working hard” on this issue, but users are not convinced. Some of them are livid: “Who is sleeping behind his desk [at Google]?” demands an angry Swiss user who bought an Android handset only to find out that he cannot buy applications. Others are clueless: “Nokia doesn’t restrict countries with Ovi Store [the equivalent of the Android Market for Nokia’s phones]. This is so unlike Google. Why are they punishing us for investing in their platform?” asks a Swedish game developer.

In the last two months, only one new “supported country” for paid applications has been added: Italy. This slow pace is hurting Google’s image in many countries, as handsets are being offered in markets whose users are effectively shut out of the Android Market. An even more serious side effect is starting to show: piracy. As users have no way to legally buy the applications they want, some are turning to illegal Android distribution sites, which are proliferating on the Internet.

The discussion forums are buzzing with developers complaining about being shut out. Others (from “supported” countries) are offering to resell applications on behalf of those who are shut out of the Market because of their location. Alternative distribution channels are also under discussion, but many developers believe that these pale in comparison with the native market applications, such as Apple’s App Store, that come with the handsets.

But Google is aware of the problem. When asked, a Google spokesperson replied: “We’ll add support for additional countries in the coming months, but we have nothing to announce at this time”. Until then, many developers will need to make a difficult decision on whether they can make money on the Android platform.



Beyond RBAC

28.09.2009 by Felix Gaehtgens

Please join me tomorrow for a free webinar on the topic “Beyond Role Based Access Control – the ABAC Approach”.

Many – if not most – organisations are not getting as much value as they thought from RBAC (role based access control). In fact, many RBAC projects start with high expectations, but quickly get bogged down due to many issues and problems. Eventually it turns out that the initial expectations were too ambitious. But why? Is RBAC making promises that are difficult to keep?

Many in the industry (Babak and myself included) think that this is because the real world is simply too complex to model efficiently with RBAC. This means that organisations must be realistic about what they can achieve with RBAC and mitigate some of its shortcomings. But isn’t there a better way? There certainly is, and that’s what we’ll be speaking about tomorrow. There’s nothing wrong with roles per se, but we need to add more context to them. Then, finally, we can reap the full benefits of agile access management, and reach – even surpass – the value that was expected from troubled RBAC projects.

I am delighted to speak again on a webinar with Babak Sadighi, CEO and one of the founders of Axiomatics. Babak and his colleagues are an extremely smart bunch of people who are very passionate about access control and have researched the topic for many years. I interviewed Babak at the last European Identity Conference, which you can see here. So if you’re interested in access and role management, please join us tomorrow – I promise that you will not be disappointed :-)


Microsoft: minimum disclosure about minimum disclosure

03.08.2009 by Felix Gaehtgens

A good year ago, Microsoft acquired Credentica, an innovative company founded by the visionary cryptographer Stefan Brands. The company had come up with a privacy-enabling technology, U-Prove, that effectively allows users to safely transmit the minimum required information about themselves when required to – and gives those receiving the information proof that it is valid. For example: if a country issued a digital identification card, and a service provider needed to check whether the holder is over 18 years of age, the technology would allow exactly that check – instead of requiring a full data set to be transmitted, including the date of birth. The technology works through a complex set of encryption and signing rules and is a win-win both for users who need to provide information and for those receiving it (also called “relying parties” in geek speak). With the acquisition, Microsoft now owns all of the rights to the technology – and more importantly, the patents associated with it. Stefan Brands is now part of Microsoft’s identity team, filled with top-notch brilliant minds such as Dick Hardt, Ariel Gordon, Mark Wahl, Kim Cameron and numerous others.

Privacy advocates should be (and are) happy about this technology, because it effectively allows consumers to protect their information instead of being forced to give up unnecessary information to transact business. How many times have we had to hand over personal information for some type of service without any real need for it? If you’re not shipping anything to me, what’s the point of providing my home address? If you are legally required to verify that I’m over 18 (or 21), why would you really need to know my credit card details and my home address? If you need to know that I am a customer of one of your partner banks, why would you also need to know my bank account number? Minimum disclosure makes transactions possible with exactly the right set of personal details being exchanged. For the enterprises taking the data, this is also a very positive thing: instead of having to “coax” unnecessary information out of potential customers, they can make a clear case for the information they do require to fulfil the transaction, and will ultimately find consumers more willing to do business with them.
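To make the idea concrete, here is a purely hypothetical sketch – Microsoft has not published U-Prove’s actual programming interface, so every name below is invented for illustration. The essential property is that the relying party learns only whether a predicate holds, never the underlying attribute:

    // Hypothetical Java interfaces illustrating minimum disclosure.
    // This is NOT U-Prove's actual API; all names are invented.
    interface DigitalIdCard {
        // Produce a cryptographic proof that a predicate over an attribute
        // holds (e.g. "age >= 18") without revealing the attribute itself.
        Proof prove(String predicate);
    }

    interface Proof {
        // The relying party verifies the proof against the issuer's public
        // key and learns nothing beyond "the predicate is true".
        boolean verify(byte[] issuerPublicKey);
    }

The service provider never sees a date of birth – it only sees a proof, issued under the government’s key, that the “over 18” predicate is true.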

So all of this is really great. And what’s even better, Microsoft’s chief identity architect, Kim Cameron, has promised not to “hoard” this technology for Microsoft’s own products, but to actually contribute it to society in order to make the Internet a better place. But more than one year down the line, Microsoft has not made a single statement about what will happen to U-Prove: minimum disclosure about its minimum disclosure technology (pun intended!). In a post that I made a year ago, I tried to make the point that this technology is so incredibly important for the future of the Internet that Microsoft should announce its plans for what it will do with the technology (and the patents associated with it).

Kim’s response was that Microsoft had no intention of “hoarding” the technology for its own purposes. He highlighted, however, that it would take time to do this – time for Microsoft’s lawyers, executives and technologists to iron out the details.

Well – it’s been a year, and the only “minimum disclosure” we can see is Microsoft’s unwillingness to talk about it. The debate around different governments’ proposals for electronic passports and ID cards is heating up around the world. Combined with the growing dangers of identity theft and the continued news of spectacular leaks and thefts of personal information, opening up this technology now would really make our day. Unless you’re a spammer or identity thief, of course.

So it’s about time Microsoft started making some statements to reassure all of us about what is going to happen with the U-Prove technology and – more importantly – with the patents. Microsoft has been reinventing itself, making a continuous effort to turn from the “bad guys of identity” of a decade ago (in the old Hailstorm days with Microsoft Passport) into the “good guys” of identity, with its open approach to identity, privacy protection and standardisation. At KuppingerCole we have loudly applauded the Identity Metasystem and Infocards as a ground-breaking innovation that we believe will transform the way we use the Internet in the years to come. Now is the time to really set off the transformative wave of innovation that will come when we finally address the dire need for privacy protection. Microsoft has the key in its hands – or rather, locked in a drawer. C’mon guys, when will that drawer finally be opened?


Finally: an open XACML API!

31.07.2009 by Felix Gaehtgens

Whilst at the Burton Group’s Catalyst 2009 conference, I ran into Prateek Mishra from Oracle, who told me somewhere between the lines of our conversation that a new XACML API had just been posted to the OASIS XACML TC. It was a “soft launch”, announced at the Kantara meetings on Monday at Burton Catalyst (which, very unfortunately, I missed). When Prateek mentioned it to me, it stopped me dead in my tracks, because I find it really significant news – a very important step towards flexible access control policy based on XACML. Before I get into the details, let me step back a bit and explain what this is, why I find it so significant and why it got me so excited.

XACML, the eXtensible Access Control Markup Language, is an XML-based standard for authorization and access control. It is based on the Attribute Based Access Control (ABAC) model, which is hailed as the next generation of access control models. According to many, ABAC will ultimately replace RBAC (Role Based Access Control). Instead of only using a role as the determining factor in whether to grant access, many attributes can be used. Of course roles can be used in ABAC as well – since ABAC can use multiple attributes to make access control decisions, the “role” can simply be one of those attributes – so ABAC can emulate RBAC perfectly while adding many additional advantages. This means that it is possible to add context to access control decisions, allowing finer granularity, tighter controls and more flexibility for the business.

Here’s an example: I might be authorised to make bank transfers from an application. In RBAC, this would usually mean that I have a role enabled for my account, for example “Make_Transfers”. Simple, right? Well, perhaps not. As the need for control gets tighter, I may be authorised to make transfers only up to a value of 2000 EUR without any approval; anything above that requires the approval of at least two financial supervisors. So how would you do this with RBAC? Not so easy. And with ABAC? Piece of cake. With RBAC, the bank transfer application would need some hardwired piece of logic implementing the “max 2000 EUR without approval” rule. With ABAC, the policy could simply express that if I have the role “Make_Transfers” and “transfer_amount <= 2000”, the operation is approved. Also approved is an operation where I have the role “Make_Transfers”, “transfer_amount > 2000” and “valid_approvals >= 2”. Everything else is denied.
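For illustration, here is that policy as a small sketch in plain Java, with invented class and field names. The whole point of ABAC is that this logic would not be hardwired in the application like this, but expressed in a XACML policy and evaluated by a policy decision point:

    // Illustrative sketch of the transfer policy above; all names invented.
    class TransferRequest {
        java.util.Set<String> roles;
        double amount;
        int validApprovals;
    }

    class TransferPolicy {
        static boolean permit(TransferRequest r) {
            if (!r.roles.contains("Make_Transfers")) {
                return false;              // no role, no transfer
            }
            if (r.amount <= 2000) {
                return true;               // small transfers need no approval
            }
            return r.validApprovals >= 2;  // larger ones need two approvals
        }
    }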

So let me get back to the XACML API. There has been real adoption of XACML – I could see it for myself here at Burton Catalyst 2009, simply from the number of vendors that are actively supporting it and using it for policy enforcement and access control. What has really been missing, however, was a ready-to-use API that would let developers make an access control request from their application and get a decision. Now we finally have an API that allows developers to do just that. I spent over an hour yesterday hunched over my brand-new netbook with Prateek and Pat Patterson, poring over the API, and can only say: thumbs up!

So what can this API be used for? Is it easy enough for developers to jump on and enable their applications for externalised access control? Well, not really. XACML is a very powerful and expressive policy modelling language, and it also defines a request/response protocol; this creates a certain level of complexity. Whilst it is of course possible for application developers to use this API in their applications, I think that higher-level authorisation APIs are still needed to make it “dead easy” for developers to externalise access control. By way of comparison, I was very impressed at how easy the Geneva Framework (now called WIF, or Windows Identity Foundation) makes it for .NET developers: Microsoft has made it truly “dead easy” to make applications ready for externalised authentication and claims, with just a few lines of “plumbing code”. Externalising authorisation must be made just as simple. The XACML API is a great start – a foundation to which simpler APIs and existing access control frameworks can be connected.
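To give an idea of the level of abstraction involved, here is a purely hypothetical sketch of what a decision request from application code might look like. None of these class or method names come from the API actually posted to the OASIS XACML TC – consult that posting for the real interfaces:

    // Hypothetical sketch only; all interfaces below are invented stand-ins
    // for whatever the real XACML API defines.
    class AuthzExample {
        boolean mayTransfer(PolicyDecisionPoint pdp, String user, double amount) {
            DecisionRequest request = new DecisionRequest()
                    .subject("user-id", user)
                    .action("action-id", "transfer")
                    .resource("resource-id", "account")
                    .environment("transfer_amount", Double.toString(amount));
            return pdp.decide(request).isPermit();
        }
    }

    interface PolicyDecisionPoint { Decision decide(DecisionRequest r); }
    interface Decision { boolean isPermit(); }
    class DecisionRequest {
        DecisionRequest subject(String key, String value)     { return this; }
        DecisionRequest action(String key, String value)      { return this; }
        DecisionRequest resource(String key, String value)    { return this; }
        DecisionRequest environment(String key, String value) { return this; }
    }

A higher-level API would hide even this behind something closer to a single isAllowed() call – which is exactly the kind of simplification I would like to see built on top.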

Kudos to Cisco and Oracle for having contributed this. Great work, guys!


Virtual Directory Innovations

21.07.2009 by Felix Gaehtgens

As someone actively covering directory services and virtual directories, I have recently noticed several innovations. The players in the virtual directory space are (in alphabetical order) Optimal IdM, Oracle, Radiant Logic, Red Hat, SAP and Symlabs. When it comes to product development and innovation within the last year, you can split those vendors right down the middle: Optimal IdM, Radiant Logic and Symlabs have been actively developing their products and churning out new features in new versions. The others have not added any features, but instead spent their time changing logos, product names and default file locations, and otherwise integrating the virtual directory products into the respective Oracle/Red Hat/SAP identity management ecosystems. In fact, in some of the latter cases I ask myself whether any virtual directory product innovation is to be expected at all anymore.

So what’s new? Where’s the innovation happening?

Optimal IdM: New connectors have been added for Microsoft SQL Server 2008 and eDirectory. A special version for Microsoft SharePoint integration has also been released, as well as “automated compliance features” that monitor for changes violating definable rules and alert administrators.

Radiant Logic: Its flagship product, formerly Radiant VDS (Virtual Directory Server), has been split into two new products: the VDS Proxy Edition and the VDS Context Edition. The former is a classical virtual directory product that falls into the same category as Oracle’s and Symlabs’ virtual directory products; the latter is a mix of meta-directory and virtual directory features. Radiant Logic has rewritten major parts of the virtual directory core to make it more efficient, in order to overcome the performance problems that used to be a weak point of the product.

Symlabs: Full virtual tree functionality has been added, which makes the product easier to configure. In the past, a virtual tree had to be constructed by manually configuring plugins to filter and route requests, which made configuration more difficult compared to other virtual directory products – a weak point, much as performance used to be in Radiant Logic’s virtual directory server. Symlabs has also added a complete web-based remote administration interface that can be used instead of, or side by side with, the local Java configuration interface.

What else is new? The latest piece of news comes from Symlabs, who have released a competitive benchmark paper containing the results of a comprehensive benchmark of the virtual directory servers from Oracle, Radiant Logic and Symlabs. The numbers speak for themselves. Of course, comparative tests by vendors must always be taken with a grain of salt, and in the report Symlabs encourages companies to run their own benchmarks to verify the results. However, the numbers are credible and document what has already been known for some time: the Symlabs product comes out as the fastest virtual directory. This is unsurprising – its very efficient internal design and small footprint translate into a level of efficiency that surpasses the other virtual directory servers.

In second place in the competitive benchmark comes Radiant Logic’s VDS Proxy Edition, which is also interesting. Until the end of last year, Radiant Logic’s virtual directory product was at the tail end of all performance benchmarks, beaten by both Oracle and Symlabs by at least an order of magnitude. Radiant Logic did some hard work last year to catch up, and it shows: its product now surpasses Oracle’s in the benchmarks and comes in second.

The virtual directory segment continues to be innovative. This is good for customers, who are increasingly adopting virtual directories as simple point solutions to solve integration issues between applications and directory servers. However, innovation does not happen everywhere: it has been very quiet around Red Hat’s, SAP’s and Oracle’s virtual directory products for a long time, and up to now little has happened with those products. Optimal IdM, Radiant Logic and Symlabs, meanwhile, have made some serious enhancements to their products and compete head-on in the virtual directory arena. Remember the old stereotype that smaller companies tend to be much more innovative than larger ones?


Hooray, LDAPcon 2009 is coming up!

16.06.2009 by Felix Gaehtgens

I was delighted to see that LDAPcon is happening again this year. I went to the first event in Cologne, Germany, in 2007, and was very impressed. When you have the crème de la crème of the LDAP community talking about their favourite topic, you’re guaranteed an interesting and exhilarating time – assuming that LDAP and directories are your thing.

I still remember how Howard Chu gave us a musical demonstration of how a well-performing directory should perform – on the violin! I don’t think anybody has forgotten that. We also got a very good overview of the different open source projects around directories, and of how to make good use of some of the LDAP extensions.

This time we’ll again have two action-packed days, and the call for papers is open. I encourage everybody to share their best practices, visions and thoughts and make this an unforgettable event as well. I’ll be submitting for sure :-)

LDAPcon takes place in Portland and starts on September 20, a day before LinuxCon; the second day will be shared with LinuxCon, it seems. Might as well stay for LinuxCon too! This is a good event not just for directory vendors and project maintainers, but especially for those who deploy and run LDAP directories in challenging environments, and for those who develop software that talks to LDAP servers. Kudos to the Symas guys for helping organise it (and they are just helping to organise it – it’s not at all an OpenLDAP conference, if that’s what you’re thinking). I’m definitely looking forward to it!

BTW I just saw that Ludo wrote about it as well, and even posted some photos from the 2007 event.



UnboundID launches frontal attack on Sun – good idea??

11.06.2009 by Felix Gaehtgens

I recently received a press release from UnboundID announcing the availability of a new “synchronization server”. This software keeps two LDAP servers in sync (as the name suggests) – bidirectionally. In theory very useful, and it’s free too. But there’s a catch: the synchronization server supports just two products, Sun’s DSEE (Directory Server Enterprise Edition) and the new UnboundID Directory Server. In the release, UnboundID makes no secret of what this software should be used for: to migrate away from Sun’s directory towards UnboundID’s competing solution.

UnboundID is a start-up based in Austin, TX. It was founded by several ex-Sun employees, including Neil Wilson, author of the SLAMD load generation engine and formerly one of the key people behind Sun’s OpenDS. I have already raved about their new LDAP SDK for Java – in my opinion the finest and most complete LDAP development kit ever written for any language.

The company is going after the very lucrative Telco and large service provider market, and has launched a frontal attack on Sun Microsystems, the market leader in that space. UnboundID is offering a 30–40% reduction in yearly maintenance costs if customers switch from DSEE to its solution. Of course there is the usual fine print, and the offer is limited to medium-sized directories with fewer than two million entries. Why should Sun customers switch from DSEE to UnboundID? According to UnboundID, their server is faster, has a smaller footprint and is supported on a wider range of platforms.

It is not really obvious to me however why Telcos and large service providers would want to switch. For one, DSEE has been the de-facto market leader for massive-scale directory services, and customer satisfaction is high (not just if you believe the marketing – I’ve personally heard the same from Telcos using the product). A directory server running in a Telco is an absolutely super-critical component, and ripping it out and replacing it is akin to heart surgery. DSEE is very mature after having been around for many years and the kinks have been ironed out in many very large deployments a long time ago already (in fact, I was in one of those deployments in 2002 – that was fun). UnboundID would obviously need to make a very good case and give organisations a high level of assurance for them to switch over. The Telco sector is much more innovative than others, and tends to be on the bleeding edge of technology – but even so, there is a reluctance to switch from a very mature product that “just works” to a brand-new product.

That’s why UnboundID offers the “synchronization server” in order to try to entice organisations to run both directory servers next to each other for a longer period – to evaluate and eventually become comfortable enough with the UnboundID server to make the switch. It seems that the “synchronization server” has been written specifically for this purpose.

Which, personally speaking, I think is a bit of a pity – but hopefully UnboundID will realise the immense value that this synchronisation server could have once they’ve gotten over their frontal attack on Sun. A generic synchronization server that keeps multiple directories from multiple vendors synchronised would be a fantastic value proposition, and I’m sure many organisations would jump at it – especially coming from such brilliant minds as Neil Wilson’s, known for his awesome LDAP work.

