Fusion Engines

15.07.2013 by Craig Burton

One of my favorite movies released in 2012 was Cloud Atlas. This is not necessarily an easy movie to watch or explain.

But that is not why I bring it up.

In one of the film’s many timelines, there is a post-apocalyptic setting where civilization is very primitive. In this primitive civilization, the two main groups are an island’s main inhabitants—goat herders—and the “Prescients,” who are very advanced and seemingly from a different planet. Twice a year the goat herders and the Prescients meet to barter and exchange information.

The goat herders are extremely curious about the Prescients and how they travel so magically across the waves of the sea. “Twice a year the Prescients come bartering on waves. Their ships come creep crawling just floatin’ on the smart of the old un’s.” Tom Hanks’s narration is magical and mysterious, just as the knowledge of the Prescients seems to the goat herders. In a subsequent scene, the tribal elders can no longer resist, and they ask Halle Berry’s character—the Prescient emissary—how their ships float on the waves.

She answers the question with complete honesty: “Fusion Engines…” Everyone in the room nods, and the term “Fusion Engines” gets passed around the room as if it were obvious and plain, as if the answers to the mysteries of the old ones had finally been revealed.

Tom Hanks goes on to narrate that no one wanted to ask what a “Fusion Engine” was because they didn’t want to look stupid in front of the gathering.

Tech Talk and Fusion Engines

OAuth, XACML, Federated Naming… My point is, as technologists, we sometimes love the mystery and complexity of the language we use to talk about the trends and information we are discussing. It is hard to avoid. This is complex stuff. Sometimes there are no words in existence yet that clearly explain to all those concerned how things really work.

Fusion Engines… Just nod your head and act like it is all obvious. The reality is, there is probably no one who knows exactly what all of this technology surrounding Identity and Access Management is. Further, the people who do know are usually happy to explain and wouldn’t think anyone is stupid for asking.

So if a “Fusion Engine” moment comes up for you, don’t hesitate to ask what is really being talked about. It helps everybody.


More Consolidation for the API Economy

24.04.2013 by Craig Burton

CA Technologies acquires Layer 7, MuleSoft acquires Programmable Web, 3Scale gets funding

It is clear that the API Economy is kicking into gear in a big way. Last week, Intel announced its acquisition of Mashery; this week, CA Technologies announced its acquisition of Layer 7, MuleSoft announced its acquisition of ProgrammableWeb, and 3Scale closed a funding round of $4.2M.

Money is flooding into the API Economy as the importance of APIs only heightens. Expect this trend to continue.

The upside of this flurry of activity is the focus being given to the API Economy.

But here is my assessment.

CA’s acquisition of Layer 7 doesn’t necessarily bode well for Layer 7 or its customers. CA, as a large vendor, will probably take longer than Layer 7 would have independently to define and deliver on the roadmap, but it might put far more power behind that roadmap and its execution. Layer 7 needs an upgrade and needs to move to the cloud. CA has a clear cloud strategy that it executes on – look at IAM and Service Management, where a large portion of the products is available as a cloud service; there is strong potential for CA putting far more pressure behind the required move of Layer 7 to the cloud. Let’s see what happens there.

MuleSoft’s acquisition of ProgrammableWeb is a little weird. John Musser is an independent, well-spoken representative of the API Economy. MuleSoft has an agenda with its own platform. Will MuleSoft let Musser continue to be an independent spokesperson? Where does this lead? All answers are unknown.

3Scale closed a funding round of $4.2M. It plans to use the funds to add more extensions to the product and grow its international distribution.

Lots of activity here. Curious to see what happens next.

However, one thing is clear: The API Economy is going mainstream.


Intel Announces Mashery Acquisition

23.04.2013 by Craig Burton

From partnership to acquisition

Let there be no confusion: Intel is a hardware company. It makes microchips. This is its core business. History shows that companies do best when they stick to their roots. There are exceptions. At the same time, Intel has always dabbled in software at some level, mostly in products that support the chip architecture: compilers, development tools, and debuggers. From time to time, however, Intel ventures into the software business with more serious intentions. Back in 1991, Intel acquired LAN Systems in an attempt to get more serious about the LAN utility business. This direction was later abandoned and Intel went back to its knitting as a chip vendor.

Recently, Intel has again become serious about being in the software business. Its most serious foray was the purchase of McAfee in 2010 to the tune of some $7.6 billion. A pretty serious commitment. We wrote recently about Intel’s intent to be a serious player in the Identity Management business with its composite platform, Expressway API Management. With that approach, Intel was clear that it had an “investment” in Mashery that would remain an arm’s-length relationship, best supporting the customer and allowing maximum flexibility for Mashery. In general, I like this approach better than an acquisition. Acquisitions of little companies by big companies don’t always turn out for the best for anyone.

Since then, it is clear that Intel management has shifted its view and thinks that outright ownership of Mashery is a better plan. While we agree that outright ownership can mean more control and management of direction, it can also mean the marginalization of an independent group that could possibly act more dynamically on its own. It is still too early to tell exactly how this will turn out for Intel and its customers; it will be important to watch and see how the organization is integrated into the company.


The Façade Proxy

18.03.2013 by Craig Burton

Securing BYOD

With the rapidly emerging cloud-mobile-social Troika coupled with the API Economy, there are many questions about how to design systems that allow application access to internal information and resources via APIs without compromising the integrity of enterprise assets. And on the other hand, how do we prevent personal information from propagating inappropriately as personal data stores and information are processed and accessed? Indeed, I have read so many articles lately that predict utter catastrophe from the inevitable smartphone and tablet application rush that leverages the burgeoning API Economy.

In recent posts, I have posited that one approach to solving the problem is by using an IdMaaS design for authentication and authorization.

Another proposed approach—that keeps coming up—is a system construct that is referred to as the “Façade Proxy.”

A good place to start understanding the nature of Façades is an article by Bruno Pedro entitled “Using Facades to Decouple API Integrations.”

In this article Bruno explains:

A Façade is an object that provides simple access to complex – or external – functionality. It might be used to group together several methods into a single one, to abstract a very complex method into several simple calls or, more generically, to decouple two pieces of code where there’s a strong dependency of one over the other.

Figure 1 – Facade Pattern Design (Source: Cloudwork)

What happens when you develop API calls inside your code and, suddenly, the API is upgraded and some of its methods or parameters change? You’ll have to change your application code to handle those changes. Also, by changing your internal application code, you might have to change the way some of your objects behave. It is easy to overlook every instance and can require you to double-check multiple lines of code.
There’s a better way to keep API calls up-to-date. By writing a Façade with the single responsibility of interacting with the external Web service, you can defend your code from external changes. Now, whenever the API changes, all you have to do is update your Façade. Your internal application code will remain untouched.
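To make the pattern concrete, here is a minimal sketch of a Façade in Python. It is the single place in the application that knows about an external web service; the service URL, field names, and class are hypothetical illustrations of the pattern, not anything taken from Bruno’s article.

import requests  # only the Façade knows about HTTP at all


class PaymentGatewayFacade:
    """Single point of contact with a hypothetical external payment API.

    If the external API renames fields or changes endpoints, only this
    class has to change; the rest of the application keeps calling
    charge() with the same arguments.
    """

    BASE_URL = "https://api.example-payments.com/v2"  # hypothetical endpoint

    def __init__(self, api_key: str):
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_key}"

    def charge(self, customer_id: str, amount_cents: int) -> str:
        """Translate the app's simple call into the provider's wire format."""
        resp = self.session.post(
            f"{self.BASE_URL}/charges",
            json={"customer": customer_id, "amount": amount_cents},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["charge_id"]  # hypothetical response field


# Application code depends only on the Façade, never on the provider's API:
# gateway = PaymentGatewayFacade(api_key="sk_test_...")
# charge_id = gateway.charge("cust_42", 1999)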

To shed even more light on how a Façade Proxy is designed and how it can be used to address yet another problem, there is a blog post from Kin Lane. Kin is an API evangelist extraordinaire, and I learn a lot from his writings. Kin recently wrote in a blog post entitled “An API that Scrubs Personally Identifiable Information from Other APIs”:

I had a conversation with one UC Berkeley analyst about a problem that isn’t just unique to a university, but they are working on an innovative solution for.

The problem:

UCB Developers are creating Web Services that provide access to sensitive data (e.g. grades, transcripts, current enrollments) but only trusted applications are typically allowed to access these Web Services to prevent misuse of the sensitive data. Expanding access to these services, while preserving the confidentiality of the data, could provide student and third party developers with opportunities to create new applications that provide UCB students with enhanced services.

The solution:

Wrapping untrusted applications in a “Proxied Façade Service” framework that passes anonymous tickets through the “untrusted” application to underlying services that can independently extract the necessary personal information provides a secure way of allowing an application to retrieve a Web User’s Business data (e.g. their current course enrollments) WITHOUT exposing any identifying information about the user to the untrusted application.

I find their problem and solution fascinating, I also think it is something that could have huge potential. When data leaves any school, healthcare provider, financial services or government office, the presence of sensitive data is always a concern. More data will be leaving these trusted systems, for use in not just apps, but also for analysis and visualizations, and the need to scrub personally identifiable information will only grow.
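As a thought experiment, here is a minimal sketch of how such a “Proxied Façade Service” could work: a trusted front end exchanges a real user identity for an anonymous ticket, and the untrusted application presents only the ticket to the façade, never the identity itself. The endpoints, ticket store, and sample data are my own illustration, not UC Berkeley’s actual framework.

from flask import Flask, jsonify, request
import secrets

app = Flask(__name__)

# Hypothetical in-memory stores; a real deployment would use the campus
# identity system and a backing database.
TICKETS = {}                                        # anonymous ticket -> real user id
ENROLLMENTS = {"student-123": ["CS 161", "STAT 133"]}


@app.route("/tickets", methods=["POST"])
def issue_ticket():
    """Trusted front end exchanges a real user id for an anonymous ticket."""
    user_id = request.json["user_id"]
    ticket = secrets.token_urlsafe(16)
    TICKETS[ticket] = user_id
    return jsonify({"ticket": ticket})


@app.route("/facade/enrollments/<ticket>")
def enrollments(ticket):
    """Untrusted app presents only the ticket; it never sees the user id."""
    user_id = TICKETS.get(ticket)
    if user_id is None:
        return jsonify({"error": "unknown ticket"}), 404
    # Return the business data with no identifying information attached.
    return jsonify({"courses": ENROLLMENTS.get(user_id, [])})


if __name__ == "__main__":
    app.run()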

Finally, Intel recently announced its Expressway API Manager product suite. EAM is a new category of service that Intel is calling a “Composite API Platform.” It is referred to as such because the platform is a composite of a premises-based gateway, which allows organizations to create and secure APIs that can be externalized for secure access, and a cloud-based API management service from Mashery designed to help organizations expose, monetize, and manage APIs for developers. In its design, Intel has created a RESTful Façade API that exposes an organization’s internal information and resources to developers. It is very similar to the design approach outlined by Kin. This approach looks to be an elegant use of the Façade pattern to efficiently manage authorization and authentication of mobile apps to information that needs to remain secure.

Figure 2 – EAM Application Life Cycle (Source: Intel)

I am learning a lot about the possible API designs—like the Façade Proxy—that can be useful constructs for organizations to successfully participate in the API economy and not give up the farm.


How to Make an API

21.02.2013 by Craig Burton

Introduction

Making an API is hard, and how to do it simply is a tough question. A small company out of England has figured out how to let anyone make an API with just:

  1. Dropbox
  2. A Spreadsheet
  3. A Datownia SaaS account

Datownia

One of the things I do to keep up with what is happening in the world of APIs is subscribe to the ProgrammableWeb newsletter. Every week the newsletter contains the latest APIs that have been added to the rapidly increasing list. While I seldom can get through the whole list, I inevitably find one or two new APIs that are really interesting.

Recently I ran into one that has an incredibly simple and effective method of creating an API out of a spreadsheet.

The Company is Datownia.com

I now have an API with a developer portal that is driven by data in a spreadsheet.

I can distribute developer keys to any developer I choose and then that developer can access the data and integrate it into any app.

Further, any change I make to the spreadsheet gets versioned and propagated to the API with just a click. To propagate the data, all I do is modify the spreadsheet and drop it into the linked Dropbox folder.

Here is what my spreadsheet looks like.

[screenshot: the source spreadsheet]

Here is what the JSON looks like when you make a RESTful call to the API location Datownia created for me.

[screenshot: JSON response from the API]

So simple.
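To give a feel for what consuming such an API involves, a call might look roughly like the following. The endpoint, the API-key header, and the field names are illustrative guesses on my part, not Datownia’s actual interface.

import requests

# Hypothetical endpoint and developer key; Datownia's real URL scheme
# and authentication header may differ.
API_URL = "https://api.datownia.com/example/contacts.json"

resp = requests.get(API_URL, headers={"X-Api-Key": "your-developer-key"}, timeout=10)
resp.raise_for_status()

for row in resp.json():            # one JSON object per spreadsheet row
    print(row["name"], row["email"])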

I have been talking a lot about companies that manage already existing APIs. But what about organizations that need to create APIs?

A few weeks ago, I received an email from the CEO of Datownia offering me a small gift to chat with him about what I was doing with their technology.

Of course as an analyst I can’t accept any gifts, but I had a great conversation with William Lovegrove about the technology and where the idea came from.

From one-offs to a SaaS

Basically, William’s little consulting firm was busy building and evangelizing APIs to organizations. When a company was confronted with making an API, progress would often screech to a halt, or at least be diverted while things were sorted out. Often, IT departments simply could not deal with making an API for anything. Other times, his firm would be engaged to create a one-off API for a company.

Complicated, expensive and not very efficient.

Datownia then came up with the idea of building a service in the cloud that automates the process of building an API.

I think this is brilliant.

If you need an API, or just want to play with a prototype, you should take a look at how simple this is.

Thanks William Lovegrove and crew.


It Takes a Community to Manage an API Ecosystem

28.11.2012 by Craig Burton

Intro

Starting at EIC 2012, I have been talking and presenting a lot about The API Economy. The API Economy has become a strategic topic for organizations. As one can expect with a hot topic, there are many opinions and views on the matter. Therefore, there are many comments, blog posts, and articles written about The API Economy.

Needless to say, it is tough to keep track of everything being said or to follow any given thread. I should start off by saying that the questions asked in the blog post discussed below are appropriate and need to be answered.

The DataBanker thread

An interesting thread that I have been following for a while has inspired me to make a few comments about exactly what I mean by an API and to add additional data about the actual API numbers.

The people over at DataBanker published a piece in September entitled “Personal Identity in the Cloud. What’s a Programmer to Do?”

The author goes on to cite the numbers I have used in several presentations to derive the actual number of APIs we will be dealing with over the next five years. First, he questions the accuracy of the numbers and their implications.

“I have to admit, the statistics from the Apple announcement, especially when combined with the view from Cisco, definitely make one stop and think. But Craig Burton’s blog post has apocalyptic overtones that I don’t think are accurate.”

Next he starts to ask questions about what I actually mean when referring to an API.

“When Craig Burton refers to “20+ billion APIs all needing distinct identities”, what I believe he is actually referring to is interconnections and not discrete APIs.”

And finally the author states that the Identity Ecosystem being established by NSTIC will be used to address the problems brought on by The API Economy.

“Managing identity – entity or personal – within the Cloud certainly has some unique challenges. Fortunately, there are substantial communities such as the NSTIC Identity Ecosystem and projectVRM that are focused on defining standards for creating, validating, managing, and transacting trusted identities as well as looking at the broader issue of how individuals can control and assert their identity and preferences when engaging products, services, and vendors within this expanding internet of things. Multiple solutions will likely be proposed, developed, will co-exist, and eventually consolidate based on the collective wisdom and adoption of the cloud community. That community – or ecosystem – is really the key.”

So let me address each of these in turn.

The Apple and Cisco numbers and their apocalyptic overtones

First off, let me say that the numbers I quote from the iPhone 5 announcement — while a little overwhelming — are very conservative. Mary Meeker — partner at Kleiner Perkins Caufield & Byers — recently gave a talk about the growth of the device market. In that talk, she pointed out that Android phones are ramping up 6 times faster than the iPhone.

“By the end of 2013, Meeker expects there to be 160 million Android devices, 100 million Windows devices, and 80 million iOS devices shipped per quarter.”

If you believe the first axiom of The API Economy — Everything and Everyone will be API Enabled — the significance of this additional research on the number of devices being shipped is non-trivial. The current methods being used to provision and manage the identities associated with these devices are broken and cannot scale to address the issue. Call that apocalyptic if you want, but ignoring the facts does not make them go away.

Interconnections not APIs

As I pointed out earlier, DataBanker supposes that what I mean by 20+ billion APIs is really “interconnections and not discrete APIs.”

I am actually referring to a conservative number of discrete APIs. Here is how APIs work. Every API must have a unique identity. Not necessarily unique functionality, but a unique ID.

But DataBanker did find the missing information in my numbers. I didn’t include relationships and interconnections. I didn’t include them in the equation because I wanted to keep things somewhat simple. Fact is, each interconnection and relationship also needs an API and a unique ID. Thus the number of actual APIs we are looking at is 3 to 5 times bigger than the numbers I outlined originally.

NSTIC Identity Ecosystem will address the problem — NOT

Here is where DataBanker and I start to agree — at least sort of.

It will take a community to address The API Economy explosion in identity management requirements. Further, the NSTIC and ProjectVRM communities can help, but neither of these in its current state addresses the matter. For more information about what NSTIC is in this context, read this blog post.

The Ecosystem required to address billions of Identities and APIs is one that can be automated. Programmed. In order to address a programmable web, we need a programmable ecosystem to accompany it.

We are calling this ecosystem Identity Management as a Service.

Summary

I continue to stand by my numbers and projections of the implications being brought on by the API Economy. I see that in the near future, everything and everyone will be API enabled.

I also see a great number of people and organizations that do understand this issue and are moving forward with intention to address it and to succeed with the API Economy.

Links

http://blogs.kuppingercole.com/burton/2012/09/21/salesforce-identity/

http://blogs.kuppingercole.com/burton/2012/06/05/microsoft-is-finally-being-relevant/

http://blogs.kuppingercole.com/burton/2012/06/21/making-good-on-the-promise-of-idmaas/

http://blogs.kuppingercole.com/burton/2012/06/06/what-i-would-like-to-see-first-from-idmaas/

http://www.id-conf.com/sessions/1001

http://techcrunch.com/2012/11/05/mary-meeker-internet-trends/

http://databanker.com/2012/09/20/personal-identity-in-the-cloud-whats-a-programmer-to-do/

http://slashdot.org/topic/cloud/rise-of-the-api-economy-in-the-cloud/


2012 International Oasis Cloud Symposium

17.10.2012 by Craig Burton

The Intersection of Policies, Standards & Best Practices for Robust Public Sector Cloud Deployments

Introduction

Last week I was invited to attend the 2012 International Oasis Cloud Symposium.

I was very impressed. The attendance was not large—in fact, the organizers limited the number of attendees to 125 people. I was not able to attend the first day, but the second day was lively with many interesting presentations and discussions.

I won’t go over the complete agenda; if you want to see it, it can be found in PDF format here.

Overall I would say every presentation given was worth listening to and the information was both valuable and informative. Not all of the presentations have been posted yet but a good number of them—including mine—can be found at this location.

I wanted to highlight a few of the presentations that were especially interesting. Again, I think all of them are worth looking at, but here are some highlights.

Privacy by Design

The day started out with the Information and Privacy Commissioner of Ontario, Canada—Dr. Ann Cavoukian—giving a presentation via video to the group on Privacy by Design. Her message was that she and Dr. Dawn Jutla—more about Dr. Jutla in a second—are co-chairing a technical committee on Privacy by Design for Software Engineers.

“It’s all about developing code samples and documentation for software engineers and coders to embed privacy by design into technology. We are going to drill down into the “how to” in our technical committee.”

Following the video by Dr. Cavoukian, Dr. Dawn Jutla gave a presentation about Privacy by Design (PbD).

Now, I had heard of Dr. Cavoukian and the PbD movement, but I had never been exposed to any details. The details were amazing, and I like the 7 Foundational Principles.

1. Proactive not Reactive; Preventative not Remedial

2. Privacy as the Default Setting

3. Privacy Embedded into Design

4. Full Functionality—Positive-Sum, not Zero-Sum

5. End-to-End Security—Full Lifecycle Protection

6. Visibility and Transparency—Keep it Open

7. Respect for User Privacy—Keep it User-centric

These are sound principles that make a lot of sense. So much so that I invited Dr. Jutla to attend the Internet Identity Workshop (IIW) and to jointly present with me a discussion about Privacy and Identity in an API Economy.

Dr. Jutla agreed and we will lead the discussion on both Tuesday and Wednesday of next week (October 23, 24) at IIW.

If you look at the agenda, the rest of the speakers presenting on privacy were stellar. I learned a lot.

Summary

I strongly recommend looking over the agenda and reviewing the presentations that interest you. For most organizations, that will mean every plenary and every discussion group.

I was also impressed with OASIS’s ability and willingness to invite seemingly competitive groups, like iso.org, ANSI, and Kantara. This is the way a standards body should work when it has the best interests of the industry and the objective of open standardization in mind.

Kudos to Laurent Liscia and the entire OASIS organization for the execution of a great event.


Salesforce Identity

21.09.2012 by Craig Burton

Identity Management as a Service (IdMaaS) gets a new 500lb gorilla

Introduction

When I first heard of Salesforce’s Identity announcements this week at Dreamforce, I was reminded of the old joke “Q: Where does a 500lb gorilla sit? A: Anywhere he wants.”

Salesforce Identity makes Salesforce the new 500lb gorilla in the Digital Identity jungle.

Announcement Details

You can read the basic details of the announcement on Chuck Mortimore’s blog. Here is a quick summary:

What is Salesforce Identity?

Salesforce Identity provides Identity and Access Management (IAM) services for Web and mobile applications, delivered through the simplicity, transparency, and trust of the Salesforce Platform.

  • For users, Salesforce Identity means no more frustration juggling passwords for each application. Login once and seamlessly access all your applications and data using Single Sign-On from a single, social Identity.
  • Administrators gain control and flexibility over access to applications by automating identity and access management processes through the simplicity you’ve come to expect from Salesforce.
  • CIOs can leverage existing authentication investments, while gaining control and peace-of-mind over your cloud investments via centralized reporting and deprovisioning.
  • Developers can build Web, mobile or tablet applications on the Salesforce Platform or on any third-party platform through simple standards based integration.
  • ISVs can tap into the power and distribution of AppExchange and Login with Salesforce regardless of where their app runs, be that Force.com, Heroku, mobile, or any other cloud.

High Level Analysis

I find it so fascinating that the laggard in joining the Cloud Computing parade—Microsoft—was the first to announce an IdMaaS initiative, in a very low-key, understated way. And that the leader in the SaaS movement—Salesforce—shouts its IdMaaS strategy from the rooftops at its mainstream technology conference, with Marc Benioff leading as the main spokesperson. It so underlines how clueless Steve Ballmer is about the issues facing Microsoft and its customers.

Identity, and solving the problem of Identity in a Cambrian Explosion of Everything, is job 1.

  • There are some people at Microsoft that know this. This does not include Steve Ballmer.
  • As of today, everybody at Salesforce knows it and can’t avoid it. Marc Benioff made the announcement and outlined the vision for Identity in Salesforce’s future.

Putting it another way, the Computing Troika—Cloud Computing, Mobile Computing, and Social Computing—has forced the issue of digital identity to the surface as the keystone technology issue for everything.

Without a tractable implementation of identity for the entire industry to use—think IdMaaS—all entrances to the future of computing collapse; the identity keystone holds it all together.

Salesforce entering the IdMaaS business with its substantial vision, leadership, and technology resources cannot help but have a positive effect on everyone in the long term.

Of course we will have to wait and see exactly what Salesforce delivers in the initial IdMaaS implementation, but Chuck Mortimore has an impeccable track record and knows his stuff.

I am impressed and will follow up after more information is available.


Identity in a Post-PC Era

17.09.2012 by Craig Burton

How 400M iOS devices change everything

Most of the planet at least paid a little bit of attention to the announcement of the iPhone 5 on Sept. 12th. The anticipation for the announcement was so high, that sales of the iPhone 4 and iPhone 4s actually dipped some in the last quarter.

While I like all of the things Apple has done with the new iPhone — and I have already ordered mine — I found the other information given at the announcement to be astounding.

The numbers — presented in the keynote by CEO Tim Cook — were more than just significant. Especially when viewed from the perspective of the KuppingerCole API Economy Axioms.

These axioms are based on the API Economy phenomenon, which is occurring at the same time as the computing troika trends—cloud, social, and mobile computing.

The KuppingerCole API Axioms

  1. Everyone and everything will be API-enabled
  2. The API Ecosystem is core to any cloud strategy
  3. Baking core competency in an API-set is an economic imperative
  4. Enterprise inside-out
  5. Enterprise outside-in

Axiom #1: Everything and Everyone will be API-enabled

Understanding the first axiom is straightforward. KuppingerCole envisions that everyone — meaning all entities, not just people — and everything — even non-smart objects — will be API-enabled. It is also understood that being API-enabled necessarily requires at least one identity for everyone and everything. And in reality, almost everyone will have multiple personas and relevant identifiers, and therefore multiple identities.

Now that I have set the context with Axiom #1, let’s look at what Mr. Cook talked about.

He first gave us the total number of iOS devices to date. I knew the total was large but I had no idea just how large. As of the end of June 2012, there were a whopping 400M iOS devices. The rest of the numbers are just as mind-boggling.

  • 400 million iOS devices
  • 700,000 apps in the app store
  • Average person uses 100+ apps
  • 84 million iPads
  • 68% market share of the tablet market
  • 17 million iPads sold during April-June 2012
  • 94% of Fortune 500 investing in or deploying iPads at work

Now let’s add Cisco’s recent predictions to the mix.

  • 2.5 connections for every person on earth (19 billion) by 2016
  • 3.4 billion Internet users (45% of the planet’s population) by 2016
  • 1.3 zettabytes of annual IP traffic (Zettabyte = one sextillion or 1E+21) by 2016. This is four times as much traffic as in 2011.

If you follow the logic of my argument, there will be 20+ billion APIs all needing distinct identities by the year 2016.

Apple’s revelation of the actual numbers of iOS devices not only shows us that we are well on our way to that number, but in all likelihood we will surpass all predictions by some margin.

What does all this Mean?

The way we have been federating identities across domains using federated naming systems will simply not scale to address the needs we already have.

The wave of device proliferation isn’t coming in the future, it has already washed over us and is causing big identity related issues.

We all need to understand this phenomenon and begin to engage in addressing the matter in an intentional way.

Let me explain a little more.

Today, all federated naming systems designed to map IDs to services are admin-intensive. They all require an admin to make and verify the mappings by hand, one by one.

If you do the math, it would take more than 640,000 admins working round the clock for 5 years to get all of the mappings completed. And that is if each mapping takes only 10 minutes or so and there are no mistakes.

In other words, today’s approach isn’t going to cut it.

We are in much need of an automated method to provision federated naming systems.

The good news is that there are initiatives afoot that could help us in these matters.

  • OpenID Connect — standardized identity layer and API built on top of OAuth 2.0
  • OAuth 2.0 — Standardized authorization delegation protocol
  • SCIM — System for Cross-domain Identity Management — standardized provisioning protocol
  • UMA — User-Managed Access — standardized user-managed Identity management protocol
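To give a sense of what “programmable” provisioning looks like in practice, here is a rough sketch of creating a user with a SCIM call. The host, token, and user attributes are placeholders; the payload shape follows the SCIM 1.1 core user schema.

import requests

# Hypothetical SCIM service provider endpoint and bearer token.
SCIM_BASE = "https://idp.example.com/scim/v1"
TOKEN = "access-token-obtained-via-oauth"

new_user = {
    "schemas": ["urn:scim:schemas:core:1.0"],   # SCIM 1.1 core user schema
    "userName": "eblanton",
    "name": {"givenName": "Ed", "familyName": "Blanton"},
    "emails": [{"value": "ed@contoso.com", "primary": True}],
}

resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=new_user,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("Provisioned user id:", resp.json()["id"])

The point is not the specific payload; it is that a script like this can provision thousands of identities without an admin touching each mapping by hand.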

Summary

The need to understand the identity explosion is not something that is in the future.

It is already upon us.

We need to begin understanding the new wave of standards that will allow organizations to automate identity management in the enterprise post-haste.

There are dangers that need to be considered along this post-haste path.

None of the protocols — despite their rapid standardization tracking — have been proven to be tractable or robust enough to handle the extreme situation they are being thrust into.

We are in new — very exciting and rewarding — territory.

It is critical that we educate ourselves about the issues and keep abreast of what is happening.

Stay tuned.

 


Making Good on the Promise of IdMaaS

21.06.2012 by Craig Burton

As a follow-up to Microsoft’s announcement of IdMaaS, the company announced the soon-to-be-delivered developer preview for Windows Azure Active Directory (WAAD). As John Shewchuk puts it:

The developer preview, which will be available soon, builds on capabilities that Windows Azure Active Directory is already providing to customers. These include support for integration with consumer-oriented Internet identity providers such as Google and Facebook, and the ability to support Active Directory in deployments that span the cloud and enterprise through synchronization technology.

Together, the existing and new capabilities mean a developer can easily create applications that offer an experience that is connected with other directory-integrated applications. Users get SSO across third-party and Microsoft applications, and information such as organizational contacts, groups, and roles is shared across the applications. From an administrative perspective, Windows Azure Active Directory provides a foundation to manage the life cycle of identities and policy across applications.

In the Windows Azure Active Directory developer preview, we added a new way for applications to easily connect to the directory through the use of REST/HTTP interfaces.

An authorized application can operate on information in Windows Azure Active Directory through a URL such as:

https://directory.windows.net/contoso.com/Users('Ed@Contoso.com')

Such a URL provides direct access to objects in the directory. For example, an HTTP GET to this URL will provide the following JSON response (abbreviated for readability):

{ "d":  {
 "Manager": { "uri":"https://directory.windows.net/contoso.com/Users('User...')/Manager" },
 "MemberOf": { "uri":"https://directory.windows.net/contoso.com/Users('User...')/MemberOf" },
 "ObjectId": "90ef7131-9d01-4177-b5c6-fa2eb873ef19",
 "ObjectReference": "User_90ef7131-9d01-4177-b5c6-fa2eb873ef19",
 "ObjectType": "User",
 "AccountEnabled": true,
 "DisplayName": "Ed Blanton",
 "GivenName": "Ed",
 "Surname": "Blanton",
 "UserPrincipalName": Ed@contoso.com,
 "Mail": Ed@contoso.com,
 "JobTitle": "Vice President",
 "Department": "Operations",
 "TelephoneNumber": "4258828080",
 "Mobile": "2069417891",
 "StreetAddress": "One Main Street",
 "PhysicalDeliveryOfficeName": "Building 2",
 "City": "Redmond",
 "State": "WA",
 "Country": "US",
 "PostalCode": "98007" } 
}

Having a shared directory that enables this integration provides many benefits to developers, administrators, and users. If an application integrates with a shared directory just once—for one corporate customer, for example—in most respects no additional work needs to be done to have that integration apply to other organizations that use Windows Azure Active Directory. For an independent software vendor (ISV), this is a big change from the situation where each time a new customer acquires an application a custom integration needs to be done with the customer’s directory. With the addition of Facebook, Google, and the Microsoft account services, that one integration potentially brings a billion or more identities into the mix. The increase in the scope of applicability is profound. (Highlighting is mine).

Now that’s What I’m Talking About

There is still a lot to consider in what an IdMaaS system should actually do, but my position is that just the little bit of code reference shown here is a huge leap for usability and simplicity for all of us. I am very encouraged. This is a major indicator that Microsoft is on the right leadership track, not only to providing a specification for an industry design for IdMaaS, but also to delivering a product that will show us all how this is supposed to work.
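To make the quoted example concrete, a client call against that directory URL might look roughly like this. The token acquisition is elided, and the header shapes are my assumption rather than anything spelled out in the preview announcement.

import requests

# Token acquisition (OAuth 2.0 against Windows Azure AD) is elided here;
# assume we already hold a valid bearer token for the directory.
token = "eyJ...access-token..."

url = "https://directory.windows.net/contoso.com/Users('Ed@Contoso.com')"
resp = requests.get(
    url,
    headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

user = resp.json()["d"]            # response wrapper as shown in the JSON above
print(user["DisplayName"], user["JobTitle"])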

Bravo!

The article goes on to make commitments on support for OAuth, OpenID Connect, and SAML/P. There is no mention of JSON Path support, but I will get back to you about that. My guess is that if Microsoft is supporting JSON, JSON Path is also going to be supported. Otherwise it just wouldn’t make sense.

JSON and JSON Path

The API Economy is being fueled by the huge trend of making organizations’ core competencies accessible through APIs. Almost all of the API development occurring in this trend is based on a RESTful API design with data encoded in JSON (JavaScript Object Notation). While JSON is not a new specification by any means, it is only in the last 5 years that JSON has emerged as the preferred data format — in lieu of XML. We see this trend only becoming stronger.

JSON Path is to JSON what XPath is to XML. The following table shows a simple comparison between XPath and JSONPath.

XPath    JSONPath            Description
/        $                   the root object/element
.        @                   the current object/element
/        . or []             child operator
..       n/a                 parent operator
//       ..                  recursive descent
*        *                   wildcard
@        n/a                 attribute access; JSON structures don't have attributes
[]       []                  subscript operator; XPath uses it to iterate over element collections and predicates, in JSON it is the native array operator
|        [,]                 union; in XPath it combines node sets, in JSONPath it allows alternate names or array indices as a set
n/a      [start:end:step]    array slice operator
[]       ?()                 applies a filter (script) expression
n/a      ()                  script expression
()       n/a                 grouping in XPath
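To see the analogy in action, here is a small sketch using jsonpath-ng, one of several JSONPath implementations for Python; the library choice is mine and not part of the original comparison.

from jsonpath_ng import parse   # pip install jsonpath-ng

user = {
    "d": {
        "DisplayName": "Ed Blanton",
        "Manager": {"uri": "https://directory.windows.net/contoso.com/Users('User...')/Manager"},
        "MemberOf": {"uri": "https://directory.windows.net/contoso.com/Users('User...')/MemberOf"},
    }
}

# "$..uri" is roughly the JSONPath equivalent of the XPath "//uri":
# recursive descent from the root, matching every "uri" member.
for match in parse("$..uri").find(user):
    print(match.value)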

Summary

As an industry, we are completely underwater in getting our arms around a workable — distributed and multi-centered — identity management metasystem that can even come close to addressing the issues that are already upon us. This includes the Consumerization of IT and its subsequent identity explosion, let alone the rise of the API Economy. No other vendor has come close to articulating a vision that can get us out of the predicament we are already in. There is no turning back.

Because of the past lack of leadership at Microsoft (the crew that killed off Information Cards) regarding its future in Identity Management, I had completely written Microsoft off as being relevant. I would never have expected Microsoft to regain its footing, do an about-face, and head in the right direction. Clearly the new leadership has a vision that is ambitious and in alignment with what is needed. Shifting with this much spot-on thinking in the time frame we are talking about (a little over 18 months) is tantamount to turning an aircraft carrier 180 degrees in a swimming pool.

I am stunned, pleased and can’t wait to see what happens next.

Reference Links

Identity Management as a Service — Original blog post by Kim Cameron

Reimagining Active Directory for the Social Enterprise (Part 1) — John Shewchuk’s post about Windows Azure Directory

Microsoft is Finally Being Relevant — My response to the announcement of IdMaaS

Reimagining Active Directory for the Social Enterprise (Part 2) — Shewchuk’s follow up post

The API Economy — KuppingerCole publication


© 2014 Craig Burton, KuppingerCole