This week I'm catching up on my reading. Here are some of the best
(or at least most stimulating) articles and white papers from the pile
on my desk.
Web Services Strategies
The
Three Steps to Web Service Integration. Some particularly
insightful thoughts in this article by Sean Baker, Chief Scientist
at IONA:
- By far the biggest danger with EAI integration projects is that they simply
produce larger islands of data, functionality and complexity.
- Middleware, by its nature, creates proprietary solutions. [Excellent!]
- Web services, first and foremost, provide a way to break out of this value-destroying cycle.
- The real value of web services will lie in the ease of doing
business with a relatively small number of trusted suppliers.
Their real potential lies in the opportunity to create new, composite
applications that leverage the strengths and assets within an
organisation, or group of organisations.
Posted Saturday, October 19, 2002 10:57:24 AM
Breaking
XML to Optimize Performance. Ron Schmelzer of ZapThink
makes the case for not employing any of the various schemes
for compressing XML. I agree.
Posted Saturday, October 19, 2002 10:14:52 PM
Some
Security Considerations for Service Grids. (PDF, 23pp.)
In this white paper, Martin Milani and John Seely Brown defend
certain security aspects of the Service Grid as put forth
in an October 2001 HBR article by Brown and John Hagel. I remain
a partial skeptic of the service-grid concept. I think
it's an important piece of the web-services puzzle (at least so
long as standards remain immature), but it also has a number of
disadvantages as compared to, say, application/XML firewalls.
Some points where I take exception to, or at least question,
Milani and Brown, if not the original Hagel-Brown article:
- Re: DoS attacks, I think a service-grid provider may actually
be more vulnerable than an individual corporate IP address for
two reasons. First, a well-known service-grid provider may be
considered a more enticing target by hackers, and therefore
attract more such attacks. Second, a DoS attack aimed at one
service-grid customer can impact the performance of other customers
by denying the availability of shared resources. IOW, if a hacker
goes after my web service that's advertised as responding at
a WSN that we both use, your web service may also suffer DoS.
This wouldn't be the case if we didn't share the resources.
- Milani & Brown write, "As the service grid is the only connection
to the enterprise for specific services, the enterprise has
the ability to significantly simplify intrusion detection and
filtering, as only certain protocols and packets from the service
grid can be allowed." I don't see how this is any better
(but I admit, no worse) than a single corporate connection through
an XML firewall appliance. Someone still has to manage
the many point-to-point relationships involving a wide variety
of security technologies and standards. It's just a standard
outsource/DIY tradeoff. It isn't inherently better or even easier
if done by the service-grid vendor.
- Again, "...managing and maintaining the connectivity to
partners and service providers is far simpler and cost-effective
by connecting to a service grid rather than by directly connecting
point-to-point to all these partners and service providers."
In practice, the service-grid customer still has the obligation
to determine and express to the provider all of the rules
required to create and enforce the security policies. There
may only be a single physical/virtual connection to the service
grid, but there are no fewer application-to-application (or,
more precisely, business-to-business) relationships. The service-grid
vendors don't make business decisions for you. The only step
they'll handle is transcribing into their systems the rules
you create.
- "Another important point is that mediation services are
provided by the service grid." I've contended in the past
that such mediation is only required when the associated standards
either haven't been agreed to or haven't been universally adopted.
Today, this is the case for much of the web-services protocol
stack, but over time it will be less so. Milani & Brown may,
in fact, agree, if my inference from the following is correct:
"It also speeds up the adoption of web services by organizations,
as they do not need to wait for standards and the expensive
and cumbersome task of replacing already existing technologies
and infrastructures."
Posted Saturday, October 19, 2002 9:55:38 PM
Making Web Services Work. (PDF) This is an excellent explanation of the issues addressed by the Business Transaction Protocol (BTP). [Source: Mark Potts, Talking Blocks]
Posted Saturday, October 19, 2002 9:16:19 PM
Iterative
Improvement. In a web-services world it's often less expensive
to modify a service than it is to redesign and reprint a paper
form. Does this mean we can relax the usual fanaticism for the
top-down design of our business processes? Can we actually plan
for a more iterative design approach? In terms of version control,
adding an output XML element or attribute is almost always
backwards compatible, something that's been elusive with older
technologies. Just something I've been thinking about.
Posted Saturday, October 19, 2002 7:11:40 PM
Hagel
on Loose Coupling. More gems from John Hagel's October
9, 2002 weblog post explaining loose coupling:
- [Loose coupling reduces...] the risk that changes within one module will create unanticipated changes within other modules.
- Software has remained tightly coupled because of the inability of major vendors to agree on a universal set of standards to define interfaces across software modules. [That sounds obvious, but the implications are significant.]
- Business practices have largely evolved around available technology. Since most application software was very tightly coupled or hard-wired, business practices have tended to be tightly coupled or hard-wired.
Posted Saturday, October 19, 2002 9:22:44 PM
Web
Services: Is it CORBA Redux? Gordon Van Huizen warns that
the RPC model for SOAP has flaws that could turn web services
into this decade's CORBA. "In the world of business computing,
things like scheduled downtime, system failures and network outages
are the reality. RPC-style Web services can't cope with such intermittent
behavior."
[Source: Brent
Sleeper, who writes, "Caveat: Van Huizen is a product manager
for Sonic's messaging-oriented middleware products. Still, he
makes a persuasive argument for the choice in many enterprise
web services scenarios."]
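Van Huizen's point about intermittent availability is easy to see in a toy sketch. The following Python is purely illustrative (the FlakyService class and the "order-42" request are invented for the example): an RPC-style caller fails the moment the remote end is down, while a message-style caller hands the work to a queue and lets delivery happen once the service returns.

```python
# Toy contrast of RPC-style vs. message-style interaction with a
# service that is sometimes offline. Everything here is hypothetical.
import queue

class FlakyService:
    """Stands in for a partner's web service that is sometimes down."""
    def __init__(self):
        self.up = False

    def handle(self, request):
        if not self.up:
            raise ConnectionError("service unavailable")
        return f"processed {request}"

service = FlakyService()

# RPC style: the caller blocks on the remote call, so an outage
# surfaces immediately as a failure the caller must handle right now.
try:
    service.handle("order-42")
except ConnectionError as e:
    print("RPC call failed:", e)

# Message style: the caller hands the request to a queue and moves on;
# delivery happens once the service comes back.
outbox = queue.Queue()
outbox.put("order-42")

service.up = True  # the outage ends
while not outbox.empty():
    print("Delivered:", service.handle(outbox.get()))
```

Real messaging middleware adds persistence, retries, and acknowledgements, but the basic asymmetry is the same.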
Posted Saturday, October 19, 2002 2:03:59 PM
The Web Services Scandal. (PDF) Modulant's Jeffrey Pollock writes:
- Web services technology, despite its potential benefits, is limited in its ability to work with randomly formatted, non-standard data or data not based on XML.
- Most of the web services community is aligning around ebXML
for the advanced business process capabilities and standard
vocabularies embedded in the specification. [Really? We must
be talking to different people.]
- Standards are a painfully slow way to develop common vocabularies, they're not easy to change, and they're certainly misused and abused.
- The hype around dynamic discovery is overblown.
- Vendors aren't focused on the semantic element of the problem. [But is
that their role?]
[Source: EAI Journal]
Posted Saturday, October 19, 2002 9:25:36 AM
Web Hosting Strategies
Sticks and Carrots: Penalties and Incentives in Outsourcing Agreements. (PDF) A good whitepaper that stresses the importance of incentives for outsourcing:
- The goal of penalties in outsourcing agreements should not be to punish, to pay less for poor service, or to recover costs incurred as a result of problems. Rather, penalties should prevent problems from occurring in the first place and,
failing that, should produce changes.
- Spreading penalties across too many insignificant targets simply waters down the ability to have any impact on top priorities. In other words, any single miss results in only a minor annoyance for the vendor, who has little incentive to change behavior and strive for quality.
- Given a penalty "budget" of 15 percent of the contract price, knowing where to "invest" that 15 percent is the magic formula.
- Incentives tied only to the attainment of service levels are generally a bad idea -- why should clients pay a bonus for services they’re already paying to receive? Moreover, exceeding established service level targets generally adds no value to the client.
And much more of value. [Source: Compass]
Posted Saturday, October 19, 2002 7:45:30 PM
Larry Lessig Takes on Mickey Mouse. Thomas Greene of the U.K.-based
Register has published one of the best analyses
of Eldred v. Ashcroft (regarding the 1998 copyright extension
law), argued before the U.S. Supreme Court by Lawrence Lessig.
Posted Thursday, October 10, 2002 2:49:29 PM
A
Defense Against Intentional P2P Stream Corruption. Phil
Windley referred to my paper on using Information
Additive Codecs (IAC) on P2P Networks, and asked, 'I was wondering
if anyone has done any work in this area that uses IACs in the
presence of Byzantine faults. My gut tells me there's something
there and the work would be important since one would want to
thwart the efforts of "bad guys" to corrupt a data stream by sending
bad IAC data (think RIAA, for example).'
I replied to Phil, "My initial instinct is that the solution
is to use PKI to digitally sign each chunk of data with the author's
or legitimate distributor's private key. Then it would be essentially
impossible to corrupt the results by inserting counterfeited chunks.
Every receiver would have access to the public key in order to
ascertain the authenticity of the data on a chunk-by-chunk basis.
"Another way (perhaps not as elegant) would be to give a unique name or number identifier to each chunk, create an MD5 digest of each one, and separately download the MD5 fingerprints from a known (not P2P) source. IOW, I get my high-volume data through the P2P streaming mechanism, but separately I get the much smaller MD5 files in order to certify the streams."
Posted Saturday, October 05, 2002 12:29:36 PM
Doug's Appearances
Web Services, Trick or Treat? Web-hosting vendor Conxion is sponsoring Web Services, Trick or Treat: Don't Let the Missing Pieces Scare You on Tuesday, October 29, at Zibibbo restaurant in Palo Alto, CA. On the panel:
- Antonio Salerno, Conxion CEO
- Steve Holbrook, Program Director, Emerging e-Business Standards, IBM Corporation
- Dave Wright, .NET Solutions Architect, Microsoft Corporation
- Doug Kaye, RDS Internet Infrastructure Consulting
- Brent Sleeper, the Stencil Group
Attendance is by invitation only. Contact Phyllis
Davidson, Sr. Manager, Marketing Programs and Alliances, Conxion.
(408) 566-8529.
Subscription and Contact Info
The IT Strategy Letter is published weekly by Doug Kaye.
The content is identical to Doug's
weblogs.