Friday, October 26, 2007

QEDWiki is a great tool, but is it really for creating mashups?

Believe it or not, this post is a review of QEDWiki, the mashup development tool being promoted by IBM. QEDWiki’s current home is with alphaWorks, whose charter is to put new IBM technologies into the hands of developers. AlphaWorks helps IBM get products and ideas fleshed out before bringing them to market. IBM says that 40% of alphaWorks offerings end up in IBM products. The other very cool message is that IBM is willing to experiment and drop 60% of its alphaWorks research products by the wayside. Wouldn't you just love to have IBM's research budget?

AlphaWorks has many products available for download, but the one currently getting all the press in mashup circles is, of course, QEDWiki, part of IBM’s mashup starter kit.

Unsurprisingly, IBM claims QEDWiki uses a wiki metaphor to allow non-developers to build mashups. Just as with a wiki, end users can create pages, modify or copy pages if they have permissions, share pages and even put in links to jump from one page to another.

That's just what I did. It was very easy to create a page and drop in widgets. I tied the various page elements together so clicking on a selection in a feed changed the contents of a URL widget elsewhere on the page. I used a web service from StrikeIron to look up the city and state associated with a zip code, and even let QEDWiki build the form to ask for input. It certainly wouldn’t take a developer to understand and use QEDWiki. While usability still needs some work (there are odd refresh problems and widget placement issues), the tool itself seemed sound.
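For the curious, the lookup itself boils down to a single service call. Here's a minimal Python sketch of the pattern, assuming a hypothetical REST endpoint and JSON response; StrikeIron's real service has its own interface and schema, so treat the URL and field names below as placeholders.

```python
# A sketch of the zip-code lookup QEDWiki wired up for me.
# The endpoint and response fields are hypothetical placeholders,
# not StrikeIron's actual API.
import json
import urllib.request

def lookup_city_state(zip_code: str) -> tuple[str, str]:
    """Ask a (hypothetical) lookup service for the city and state of a zip."""
    url = f"https://example.com/zipcodes/{zip_code}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)
    return record["city"], record["state"]

city, state = lookup_city_state("94404")
print(f"{city}, {state}")  # e.g. "San Mateo, CA"
```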

After I pulled in a number of feeds, added widgets, and constructed URLs based on widget contents I had very quickly created a visually appealing and functional…portal.

Yep, I said portal.

Here’s the problem. When I wanted to bring different data streams together, I had to do it elsewhere, not within QEDWiki. So I found myself running to Yahoo! or Kapow or even del.icio.us to create feeds. (OK, the del.icio.us feeds weren't, strictly speaking, mashed.) Once I mashed the feeds I could drop them onto my wiki page, but I couldn’t use QEDWiki itself to mash the contents.

Here’s another problem. When I wanted to pull data from different web pages into a single unified end-user experience, I had to run over to Kapow to create the web service, then go back over to QEDWiki to use the service within my wiki page.

You might conclude from what I’ve said so far that I didn’t like QEDWiki. On the contrary. I liked it very much. Other than the aforementioned usability issues, it was very easy to get started and definitely was not a tool that required development know-how. Once I had mashed content elsewhere I loved being able to drop elements on the page and link them together. It was great to pull search results based on a dissected URL parameter list. It was a no-brainer to click on a feed item and have another widget populated with the data served up from the item's link. I don’t pretend to have tried every single feature, but the features I did use were excellent.

It just wasn’t a mashup. QEDWiki consumes mashed content. It displays mashed content. It does not itself help mashers create mashed content.

That looks like it is the responsibility of Mashup Hub. Mashup Hub is the other part of IBM’s mashup starter kit, along with QEDWiki. I’ve queued it up for review sometime in the future, but just looking at the specs, it looks like a server that transforms behind-the-firewall data into feeds. While that’s great functionality, it’s hardly new or innovative. Unless I hear otherwise, I'm going to leave it at the end of my eval queue.

Go ahead and give QEDWiki a try. I'm betting you will enjoy working with it. Don’t get rid of your other mashup tools just yet, however. You will definitely need them.

Monday, October 22, 2007

How close are we to overcoming the 10 challenges facing business mashups?

Once again I’m going to delay my QEDWiki review. Really, I’m going to get to it. Honestly. However, I decided that I needed to discuss overcoming the challenges presented by Dion Hinchcliffe in his post last week, The 10 top challenges facing enterprise mashups. Simply taking a futurist approach, as I did in my last post, didn't seem like it would be enough.

Hinchcliffe’s post generated a lot of discussion, both within the mashup community and within Serena Software specifically. A number of us debated his points, discussed whether we could help overcome the challenges and even used his post to guide discussions on features we plan to put in our future product releases. I guess this makes Hinchcliffe an honorary Serena Product Manager. Thanks! And thanks to my many colleagues at Serena Software whose ideas have been integrated into this post.

Hinchcliffe’s ten challenges fall into three broad categories: business challenges, governance challenges and technical challenges. Rather than addressing each of his ten issues individually, I’ll address the categories.

Business challenges: Lack of business support for mashups and lack of killer mashup applications.

Remember when the web started to grow? At first it was full of sites with pretty pictures and cool graphics. Organizations created websites as experiments or as another avenue for advertising. It wasn’t until the web killer app came along, eCommerce if you were wondering, that we had the dot-com explosion. We can repeat this story with SaaS and Salesforce.com. When the business sees a killer app, the business wants the killer app. Once we find the Salesforce.com equivalent for mashups, we’ll have the business lining up to invest.

Why hasn’t this happened yet?

Because we’re too busy talking about how cool mashups are. While cool is cool, it isn’t a killer app until it solves a business problem. We can take maps, charts and videos, we can pull in data from multiple sources and we can mash them together at the glass into a visually exciting experience for the mashup user, but no matter how cool it is, it won’t be a killer app until it’s scalable and useful. The problem with at-the-glass mashups is they don’t put the mashup in the context of a business activity. Yes, it’s great that I can pull data from many sources, but if the data aren’t actionable, what’s the point? If I can’t reuse the business logic across the organization, then why invest?

Let’s use an example. Assume I run a fleet of ice cream trucks and I want to make the best use of the trucks. I could use a presentation or data mashup to help by pulling local event information from online community calendars, school activity calendars, business announcements and even law enforcement announcements. I could map these events on a Google Map along with information about the likely size and times of the events. Using this information I could develop a schedule to optimize the routes of my trucks.

That’s a nice way to use mashups, but it isn’t a killer app. It’s not even a business mashup. It’s a data mashup with some cool graphics. A killer app would take the information from the mashup and use it automatically to schedule trucks, drivers and inventory to make sure the right trucks were at the right locations with the right inventory at the right time. The killer app would keep updating event information. A killer app would know when trucks are due for maintenance and schedule the maintenance around heavy usage days based on the mashed-up information. Our truck scheduling application is a business mashup because it puts the mashed up information in the context of the larger business problem, namely, optimizing ice cream truck utilization. The data aren’t enough. The data must be actionable and solve an actual business problem.
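To make the distinction concrete, here's a toy sketch in Python. The first half is the data mashup: gathering events. The second half is what pushes it toward a business mashup: turning the data into an action, a truck assignment. Everything here (the event list, the scheduling rule) is invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    name: str
    when: date
    expected_crowd: int

@dataclass
class Truck:
    truck_id: str
    capacity: int

def gather_events() -> list[Event]:
    """The data-mashup half: in real life this would pull from community
    calendars, school calendars and so on. Hard-coded here."""
    return [
        Event("County fair", date(2007, 10, 27), 5000),
        Event("School carnival", date(2007, 10, 27), 800),
    ]

def schedule_trucks(events: list[Event], trucks: list[Truck]) -> dict[str, str]:
    """The business-mashup half: act on the data. Naive rule -- send the
    biggest trucks to the biggest crowds."""
    by_crowd = sorted(events, key=lambda e: e.expected_crowd, reverse=True)
    by_size = sorted(trucks, key=lambda t: t.capacity, reverse=True)
    return {t.truck_id: e.name for t, e in zip(by_size, by_crowd)}

fleet = [Truck("T1", 400), Truck("T2", 150)]
print(schedule_trucks(gather_events(), fleet))
# {'T1': 'County fair', 'T2': 'School carnival'}
```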

Once we understand that a killer mashup app has to be in the context of a business activity, that the mashup data has to be actionable, and that the mashup itself must solve business problems, then we will start to see a lot more businesses take mashups seriously. Until then, well, we can always console ourselves that we are cool.

Governance Challenges: An immature services landscape, confusion over management and support of end-user mashups, chaotic data quality and accuracy, and version management.

I’ve written about this issue before, both in my futurist post about the semantic web and earlier when discussing the role IT can play as a trusted advisor to the business with respect to business mashups. Some discussions bear repeating, however, so I’ll cover some of the same ground again.

Lack of mashable content and data quality are interrelated. Without supported services tied to systems of record, mashers will have a difficult time ensuring the quality of their data. Long-term, I believe this is a problem the semantic web will help solve. Short-term, however, vendors need to start getting serious about enabling access to their products through web services. At Serena we’ve already started this process, and we will continue to add services for the foreseeable future. As mashups become more accepted in the business community instead of remaining just an IT tool, I expect we will see this trend emerge with other software vendors. Note to business mashers: If you want your vendors to provide web services, you’d better start demanding them.

Management and support of mashups will be problematic, and the problem will get worse as more mashups are developed by the business community rather than IT. When talking to IT professionals about mashups developed by the business, this is the issue that gives IT the most heartburn. As Hinchcliffe notes, once upon a time this same scenario played itself out with PCs, databases and spreadsheets. The business started building applications it couldn’t support long-term, and IT was tasked with providing support for applications about which it knew very little. IT has a long memory. I doubt they will be taken by surprise again.

Surprised or not, IT isn’t going to be able to stop business mashers from developing mashups. Not only does the business have too much at stake, but the new generation entering the workforce doesn’t have a lot of patience with corporate hierarchies. They’ve grown up with technology and won’t wait around for IT to build their applications. To stay relevant, IT needs to become the partner of business and provide a secure and scalable infrastructure in which the business can build mashups.

It is inevitable, however, that the business will eventually need support for its mashups. We could see a move toward centralization once more, just as we did when the business handed back all those Access databases to IT. However, business has a memory just as long as IT's, and it will remember that while centralization did bring order to the mish-mash of rogue applications, the cost was business agility sacrificed to strict IT control. I suspect that many on the business side of the house will look for an alternative.

Enter a new breed of vendor whose business will be to support the business. Budget oversight being what it is, these new vendors will likely provide support as part of a subscription process within a SaaS model. These vendors will need to fly under the capital expense radar and simply be a line-item on a department’s monthly expenses, similar to a cell phone bill. That means many business mashups will be purchased as part of a subscription model with support being provided by these new vendors. That way the business can build their mashups, but can also have a number to call when they need help.

I agree with Hinchcliffe that mashup version management has to be part of any mashup tool vendor’s offering. Lucky for Serena we’ve already got mashup version control as part of our mashup tools.

There is another version control issue that needs to be confronted, however. Version control of individual services has long been a problem within SOA implementations. It’s a dark not-so-secret that uncontrolled services can cause disaster in SOA-based applications. If the SOA implementation has a successful reuse policy, the problem is even worse, since a single bad service can bring down any number of applications. And yet there is no way for the SOA client to know whether a service has changed. Here software vendors and third-party web service providers need to be held accountable by consumers. Until that happens, version control will continue to be a challenge.
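Until that accountability exists, about the best a service consumer can do is detect drift on its own. A crude sketch: fingerprint whatever contract the service publishes and compare it on each run. The URL here is a placeholder.

```python
import hashlib
import urllib.request

def contract_fingerprint(contract_url: str) -> str:
    """Hash the published contract (WSDL, WADL, whatever the service
    offers) so a consumer can at least notice when it changes."""
    with urllib.request.urlopen(contract_url) as response:
        return hashlib.sha256(response.read()).hexdigest()

# Record the fingerprint when the service is first wired into a mashup,
# then compare before each use. A mismatch means the contract changed --
# though not, alas, whether the change is a breaking one.
baseline = contract_fingerprint("https://example.com/service?wsdl")
```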

Technical Challenges: No construction standards, the splintering of widgets, deep support for security and identity, and low-level mashup support by major software firms.

I’m bullish about overcoming the technical challenges cited by Hinchcliffe. If we can get the business to throw its weight behind mashups, the vendors will be under tremendous pressure to start providing solutions that make it easy for the business to adopt the business mashup model.

However, I’d like to challenge Hinchcliffe’s assertion that we need a unified method for mashup construction. Ditto for widget technology. It would be great if all the tools had a consistent approach, but I’m not sure I’d classify it as a challenge for mashup adoption in the enterprise.

Business mashers will have domain knowledge and a level of technical competence consistent with building Excel spreadsheet macros. Given that business mashups need to mash data and visual elements in the context of a business activity, it’s clear that model-based construction is the solution with legs. Our business mashers won’t be writing JavaScript. They won’t be writing any sort of code, even if that code is disguised as an XML document. They will be dragging and dropping visual, data and process elements using a familiar office-like interface. If that’s the case, the end user won’t care what is happening under the hood. A consistent method of construction may be a challenge for the vendors, but not for the mashers.

I do agree with Hinchcliffe that support for mashups among infrastructure and application vendors will continue to be an issue for some time. However, we might be able to solve some of the problems in the short term. For example, if we are to put mashed content in the context of a business activity, we must have some sort of event-driven architecture, or at the very least, a simple eventing system. Every vendor has one. Even Serena has one. We use the eventing system within the open source Eclipse ALF project. Eventing systems require participating software to kick off some external communication when important things happen.

Let’s consider our ice cream truck example. Ideally, the mashup would need an event to kick off rescheduling truck routes when a concert gets cancelled, a new truck is purchased or a driver quits. The ALF project has tried to make its eventing system generic by providing web services to raise events, but again, the web services have to be tied to custom actions within the ALF event management system. While the pattern is the same for other vendors’ eventing systems, the devil is in the details.

One way to overcome this is to use eventing systems that already exist. Email leaps to mind, as do Outlook meeting reminders. Many back-end systems already know how to send emails and already integrate with Outlook. While it may not be the best of all possible worlds, it would certainly jump-start event-oriented business mashups if the onus were on the mashup tool vendors to integrate with these existing event channels.
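As a sketch of what I mean, a mashup tool could treat an ordinary mailbox as its event channel: the back-end systems already know how to send mail, so the tool only has to poll for messages that match a convention. The mailbox details and the "EVENT:" subject convention below are my own invention.

```python
import email
import imaplib

def poll_for_events(host: str, user: str, password: str) -> list[str]:
    """Treat an inbox as a poor man's event channel: any unread message
    whose subject starts with 'EVENT:' is an event to act on."""
    events = []
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN", 'SUBJECT "EVENT:"')
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            message = email.message_from_bytes(msg_data[0][1])
            events.append(message["Subject"].removeprefix("EVENT:").strip())
    return events

# A cancelled concert might arrive as "EVENT: concert-cancelled", and the
# mashup would rerun its truck-scheduling step in response.
```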

As for other low-level support, once again Hinchcliffe has it right. We can solve some of the issues, but the bulk have to wait until software vendors feel the squeeze from customers demanding low level mashup support.

I’ve saved the hardest problem for last: security. If anything is going to kill SOA and the companion consumption-side technologies, it will be security and identity management. Consider web-based applications. We’ve been at those for over ten years, and we still don’t have security under control. With SOA the problem is even worse, because there are myriad potential back-end systems engaged in every mashup, and to date the most common method of passing around credentials is either as a parameter to service calls or in the service header. With RESTful services the problem is aggravated, since the WS-* standards generally don’t apply at all.
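To see why this worries me, here's what the two common patterns look like in practice. Both leave the secret in plain sight of every log and proxy between the mashup and the service unless the entire call is protected by TLS. The endpoint and key are, of course, hypothetical.

```python
import urllib.request

API_KEY = "s3cret"  # a hypothetical credential

# Pattern 1: the credential rides as a call parameter -- it ends up in
# server logs, browser histories and proxy caches.
req_param = urllib.request.Request(
    f"https://example.com/api/orders?apikey={API_KEY}"
)

# Pattern 2: the credential rides in a header -- marginally better, but
# still plaintext on the wire unless the call is made over TLS.
req_header = urllib.request.Request(
    "https://example.com/api/orders",
    headers={"X-API-Key": API_KEY},
)
```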

One promising solution is the open source Eclipse Higgins identity management project. Many vendors have already signed up to use Higgins, but again, until all vendors adopt the standard, we are going to have the potential for serious security breaches within mashups. Especially mashups at the glass.

My conclusion is that yes, we have some challenges, but in many cases these challenges are either already in the works to be solved, or there is at least a roadmap for solving them. The ones that aren’t going to be overcome in the short-term will be side-stepped. How? I don't know. I do know that once the business understands the potential of business mashups, nothing will get in the way of widespread adoption.

Thursday, October 18, 2007

Can the semantic web help with the ten challenges facing enterprise mashups?

I'm going to delay my review of QEDWiki yet again to comment on Dion Hinchcliffe's post, The 10 top challenges facing enterprise mashups. Hinchcliffe's blogs about Web 2.0 have been very influential over the past few years, and this excellent posting is no exception.

Fair warning: I’m going to use his post as an excuse to go off on a futurist binge and talk about the semantic web. Don't worry, though. I'm going to talk about the 'real' semantic web, not the ivory tower version.

I won't reiterate Hinchcliffe’s points; you can, and should, read them for yourselves. However, I do want to talk further about two of his challenges that I think are related, and that relate directly to the power of the emerging semantic web. His #2 challenge is an immature services landscape. There just aren't enough services out there to provide mashable content. His #6 challenge relates to data quality and accuracy. How do mashers know whether the data are accurate and up-to-date?

I see these issues as interrelated. The lack of 'supported' services is driving people to create services for themselves using various tools, HTML screen scraping being the one I've been working with lately. Before you dismiss screen scraping as a viable content creation strategy, note that the number of robots available from OpenKapow outstrips the number of services from StrikeIron and the number of APIs available from Programmable Web combined. Lack of services is causing people to turn to self-help methods to get mashable content directly from web pages. Yet we all know that web pages often have out-of-date data or even absolutely trash data.

Do you know about The Greys? The Greys are a crossbreed between humans and an extraterrestrial reptilian species. By visiting this site I learned that there are over 70 distinct species of Greys. Wow! Good thing I have this website around to help me find such valuable information.

‘The Greys’ is an extreme example, but there are others that are less silly. If you were scraping content from the US Open site about who was in the women's final, you would get one set of names for 2006 and another for 2007. Yet once the data is abstracted through a service call and incorporated into a mashup, it won't be obvious that the 2006 data is out of date. Mashup users won't, and shouldn't, be able to tell where the data came from. Mashups are first and foremost about presenting a unified experience to the mashup user. Noting where data comes from makes the mashup less of a mashup and more like a plain old integration.

How can mashers solve this problem? One way is to create more supported services so mashers will depend less on tactics such as screen scraping to get their mashup content. I doubt this will work. By some estimates there are between 19 and 30 billion web pages today, and that doesn't even count dynamic pages such as search results from the Snap-on Tools site. We aren't going to create web services to expose reliable data for all of those pages. People who need mashable content are going to get it where they can, and that means web pages themselves.

Another way to help with the data reliability problem, and this is where I think the web is going, is to start leveraging the capabilities of the semantic web. I’m talking about the practical semantic web that is emerging from the likes of del.icio.us, Facebook and Amazon. I’m not talking about the ivory tower semantic web with volumes of ontologies, deductive rules and AI searches. Some call this emerging web “Web 3.0” and some say the ivory tower version of the semantic web is “Web 3.0.” Personally, I don’t care what we call it, but I’m excited about what it is, or rather, what it can become.

To backtrack, the semantic web is a way of structuring web content so that it can be consumed both by humans and by machines. Most web content today is only consumable by humans. (Irony. It's everywhere.) That’s why we get so many trash results even from the greatest search engines. In the ivory tower version, every web page has both semantic information (what the information on the page means) as well as content. The semantic information makes the page machine consumable. A phone number is a phone number is a phone number. Once a program knows the content is a phone number, it knows how to handle said content.

In theory, but not in reality, since there are many ways to tag and format a phone number.
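A quick illustration of the problem. The same number shows up in the wild in a handful of shapes, and without a semantic tag a program has to guess, using the kind of normalization dance every scraper ends up writing:

```python
import re

samples = [
    "(650) 555-0123",
    "650.555.0123",
    "650-555-0123",
    "+1 650 555 0123",
]

def normalize_us_phone(text: str):
    """Strip everything but digits and hope what's left is a US number."""
    digits = re.sub(r"\D", "", text)
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    return digits if len(digits) == 10 else None

# All four formats collapse to the same canonical number.
assert {normalize_us_phone(s) for s in samples} == {"6505550123"}
```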

In practical terms today, web content is being slowly categorized by various tag clouds such as del.icio.us, social network sites and blogging sites such as the one you’re visiting now.

Today these clouds are disaggregated, without any sort of consistency. However, while it is unlikely we will get universal acceptance of what amounts to a tag dictionary, it is highly likely we can get universal acceptance of a small number of tags. This has already happened in specialty areas such as research libraries. Imagine a rating tag being adopted by all tag clouds so site visitors can rate the quality of a web page à la Digg, or an expiration-date tag so mashers know when content is out of date, or even a copyright tag telling mashers the page is off limits for scraping. Not that mashers would pay attention.

Imagine a world where a masher pulling content from a web page through HTML harvesting of some sort could be given a rating of how good the data is likely to be. And even with disparate tag clouds, it would be possible for mashing tools to suggest alternative content pages. Imagine a world where the mashup itself could warn users if content quality degrades below some acceptable level.
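Here's a sketch of what that check might look like from the tool's side, assuming the tag clouds ever agree on a handful of tags. The tag names and the rating threshold are pure invention.

```python
from datetime import date

def content_is_usable(tags: dict, min_rating: float = 3.5) -> bool:
    """Decide whether scraped content is safe to mash, given a
    hypothetical set of agreed-upon quality tags."""
    if tags.get("copyright") == "no-scrape":
        return False  # the page is off limits (for mashers who care)
    expires = tags.get("expires")
    if expires and date.fromisoformat(expires) < date.today():
        return False  # content is past its sell-by date
    return float(tags.get("rating", 0)) >= min_rating

# A mashing tool could run this check before pulling content, and the
# running mashup could re-check it to warn users when quality degrades.
print(content_is_usable({"rating": "4.2", "expires": "2027-01-01"}))
```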

Finally, mashers and mashup users would have some indication of whether they are getting the latest scores, the most reliable news or the best information on extraterrestrial species.

OK, this is all for the future, but perhaps the not-too-distant future.

I’ll see what I can do to convince Serena Software to start thinking about the semantic web and how we can use it to help business mashers. Meanwhile, go give Hinchcliffe's blog post a thumbs-up vote.

Tuesday, October 16, 2007

Is BPM another form of business mashup?

I was going to write a review of QEDWiki today, but IBM’s recent announcement about their starter kit has made me decide to leave it until I can do some more investigation. Given the flood of articles and blog posts about IBM’s announcement, I’m sure nobody will miss that I’m going to post on something else.

Specifically, what does the BPM community think about mashups?

BPM and SOA have been joined at the hip for several years. With SOA, business processes could be deconstructed into participating services which could be re-assembled like LEGO® bricks and executed in a production environment. I believe, although many BPM advocates would disagree, that service orientation saved BPM as a discipline. Once an organization had SOA, BPM could be used to solve myriad business problems rather than just model them.

Given the BPM-SOA link, I was very interested to read this article published on BPM Institute’s site titled SOA And Mashups – What to use when by Dr. Raj Ramesh, a BPM and SOA implementation specialist. Interestingly and surprisingly, Ramesh believes that mashups and SOA are distinct techniques for solving business problems. He says, “The question then is whether IT should use SOA or should it use mashups?” Ramesh goes on to conclude that IT should use mashups if the application is a one-off, and SOA, with BPM as the consumer, if services have the potential for reuse.

Why was I surprised? SOA versus mashups is an apples-to-oranges comparison. He should have compared BPM and mashups instead. BPM, presentation mashups and data mashups are all SOA consumers. You might want to read Rich Seeley’s article on SOA consumption patterns for a quick overview of how mashups and BPM are related. Service orientation enables mashups, it doesn't compete with them.

In fact, I would argue that BPM based on SOA is itself a form of business mashup. BPM is process-centric, pulls together content from many sources and presents a unified view of the content to the process participants. Isn’t that a business mashup?

Earlier in the piece Ramesh said, “SOA provides a methodical paradigm for a robust long-term architecture based implementation. Contrast these to mashups that are easier to develop but are a challenge to manage due to the immaturity of the tools.”

Clearly BPM practitioners don't yet see the relationship between BPM and business mashups. Too bad. With a little effort aimed at making their tools easier to deploy, BPM vendors could already be sitting on some pretty mature business mashup tools.

Tuesday, October 9, 2007

IT should be an almost-silent partner of the business masher

I'm taking a break from reviewing QEDWiki to comment on Tony Baer's excellent post about the history of application innovation and the mashup. With mashups, he says, app dev innovation history is repeating itself. I couldn't agree more. Sometimes we forget the lessons of the past in the rush to embrace the new and innovative.

Did I say "sometimes?"

We always do. And that's actually OK because behind every innovation there are a bunch of insiders who declare that it can't be done, shouldn't be done, or has already been done and failed. These people have an important place in the innovation loop, providing valuable negative feedback that should stop out-of-control oscillation. As long as the negative feedback doesn't overwhelm the positive motion of innovation, it's useful. Without collective amnesia, however, negative feedback can stop innovation in its tracks.

If collective amnesia is necessary and not at all evil, that may be why so much innovation has happened outside of IT. IT can't afford to have collective amnesia. They are directly responsible for keeping the CEO and CFO out of jail, making sure books get closed, people get paid, products get delivered and customers get billed. Radical innovation in that environment carries unacceptably high risk. That's why laptops infiltrated the workplace with new lightweight applications long before IT caught on. Web applications came out of marketing departments, and mashups, well, they are oozing out of everywhere. Everywhere except IT, that is.

Baer's point, however, is that after the rush to innovate, mundane concerns eventually do set in. Once you've built a mashup, once the enterprise has started to depend on the mashup, how will the data be kept accurate? Who will maintain it? When it starts to be used in ways that the original masher didn't intend, who will support it? These are the everyday concerns that are in the DNA of professional coders in IT. When Computer Science professionals start to build an application, they think about these issues up-front.

Not so for the innovative business masher. He or she can build an application quickly, deploy it easily and watch it grow virally throughout the organization. Right up to the time the CEO gets bad information because the mashup referenced old data.

Crash!

So am I saying that business mashers need to stop innovating? Regular readers of this blog will know that I wouldn't say anything like that. Business developers are going to drive innovation further and faster than IT exactly because they aren't hampered by such mundane considerations. They need to keep moving forward, but they need help from IT even if they don't know it or are unwilling to admit it.

What sort of help? IT can give good advice about the best way to access data so it remains up-to-date. IT should set up a secure infrastructure so the business masher doesn’t inadvertently release sensitive data into the world. IT can also give some gentle technical advice about best practices for assembling applications. They can help mashers set up a very lightweight process for getting feedback (AKA bugs and enhancement requests) and help them adopt version control before disaster strikes.

Before you in IT decide this isn’t your problem, here's the money quote from Baer: “At the end of the day, mashups that evolve to enterprise mashups are not like enterprise applications. They are enterprise apps. The only difference is that they piece together much much faster.”

Again, I couldn't agree more.

Friday, October 5, 2007

Kapow's RoboMaker has lots of features, but needs a usability study

I've been working with Kapow's RoboMaker for the past few days, investigating it for ease of use with the business masher in mind. RoboMaker is a desktop tool that enables mashers to bring content together from many sources to create RSS feeds, RESTful web services and even portal content.

From a pure 'number of features' measure, it is hands-down the most capable product I've looked at so far. Once I figured out how to use it, I was able to create an RSS feed from a news article list on Serena's website in very short order. In fact, I used the same basic steps to build both a feed and a RESTful web service.

Kapow's approach is similar to that taken by Intel's Mash Maker. (Or rather, since Kapow's offering has been out for quite a while, Intel Mash Maker's approach is similar to Kapow's.) That is, a masher identifies the content to be mashed by pulling a web page into RoboMaker. The masher then uses HTML tag paths to tell RoboMaker where to extract the content and, depending on the output destination, how to format the output.
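Under the hood, a robot like the one I built boils down to something you could hand-write: fetch the page, walk to the content via its tag path, and wrap what you find in RSS. Here's a rough Python sketch; the URL and the list-item tag path stand in for Serena's actual page structure.

```python
import html
import urllib.request
from html.parser import HTMLParser

class HeadlineExtractor(HTMLParser):
    """Collect the text inside <li> elements -- a crude stand-in for the
    tag paths RoboMaker lets you point and click at."""
    def __init__(self):
        super().__init__()
        self.in_item = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_item = True

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_item = False

    def handle_data(self, data):
        if self.in_item and data.strip():
            self.headlines.append(data.strip())

def page_to_rss(url: str) -> str:
    """Scrape a news list from a page and emit it as a tiny RSS 2.0 feed."""
    with urllib.request.urlopen(url) as response:
        page = response.read().decode("utf-8", errors="replace")
    parser = HeadlineExtractor()
    parser.feed(page)
    items = "".join(
        f"<item><title>{html.escape(h)}</title></item>" for h in parser.headlines
    )
    return (
        '<rss version="2.0"><channel>'
        "<title>Scraped feed</title>"
        f"{items}</channel></rss>"
    )
```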

RoboMaker has a number of additional features, such as the ability to branch, call out to services, simulate mouse clicks, execute JavaScript, etc. This is both a blessing and a curse. A blessing because the richness of features allows mashers more flexibility when developing content. A curse because without solid and user-friendly documentation, the extra features can be difficult to find and use.

I didn't see any evidence of a WADL, which isn't surprising considering the industry hasn't yet decided whether RESTful web services actually need a WSDL-like contract. Since it will be mighty difficult for a non-technical user to pull a service into a process-based orchestration engine without a contract, I hope we do embrace either WADL or some other contract standard. Until then, I think Kapow should be proactive and publish both the service and a WADL contract. I know, that's easy for me to say since it isn't my R&D budget.

The resulting feed, service or portal content, as specified by the robot (in Kapow-speak), is stored 'in the cloud' on Kapow's server. Kapow also supports OpenKapow, a user community where mashers can publish, share, search for and discuss robots and mashups. (Note: Kapow also has an on-premise edition that I didn't test drive. I can certainly see how the on-premise edition would be very useful for organizations trying to leverage the valuable data that lies trapped behind the firewall.)

So in most ways, except for the sheer number of features, RoboMaker is a lot like Mash Maker. Since they both use HTML scraping, that isn't a surprise. Unlike Mash Maker, however, RoboMaker produces mashable content that is based on standards, so you can create the mashup content using RoboMaker and then use any mashup tool to build the mashup itself.

That is mighty cool.

But is it ready for a business user?

Back in my Unix days we used to say that the Unix shell was 'expert friendly.' That is, once you knew the ropes, you could use the shell to do an amazing number of things. That's the way I feel about RoboMaker. I found the documentation circular and confusing and, even more infuriating, the error messages unhelpful and in many cases downright cryptic. However, once I got past the rough edges and understood what was going on, I was able to do a lot of mashing very quickly.

Back to the question. Despite the need for better documentation and a usability development cycle, RoboMaker is the first mashup tool, besides Serena Mashup Composer, of course, that I believe I could put in front of a business masher.

Tuesday, October 2, 2007

Business Mashups roll out in India

From ITVIDYA.com: The Chicago-based [sic] software applications company -- Serena Software has announced the availability of Serena Mashup Composer in the Indian market for the point-and-click creation of business mashups. Just as Web 2.0 technologies have made it possible for millions to create custom Web applications using open APIs and tools from various companies.

Well first, Serena is based in San Mateo, CA, not Chicago. That aside, this is a great article that accurately conveys Serena's business plan, both in the US and in India.

The concept of a 'business mashup' is fairly new, and not well understood today. For many people, process-centric mashups are akin to what you can build with Kapow's RoboMaker. In RoboMaker, mashup developers can use some process capabilities such as branching, to build an RSS feed or a REST-based web service. This is not what Serena means by process-centric mashups, however.

For Serena, a business mashup is much like a business process. That is, the process is long-running with a persisted state, spans functional silos and involves multiple stakeholders. A business mashup is also like a consumer and data mashup, with content pulled in from multiple sources, both within and outside of the organization. With Mashup Composer, business users can build these mashups without having to get IT involved.

I suggest you read the article and, of course, try Mashup Composer.
