This article explores cognitive dissonance in AI, focusing on the inconsistencies in AI system outputs due to conflicting data, rules, or patterns. It examines key issues such as drift, bias, overfitting, underfitting, explainability, transparency, data privacy, security, hallucinations, and data poisoning. The article also provides strategies for addressing these challenges, emphasizing the importance of continuous monitoring, bias mitigation, model complexity balance, enhancing explainability, robust data governance, and protection against data poisoning. The goal is to develop more reliable, fair, and trustworthy AI systems.
Why AI Responses Vary: Understanding the Subjectivity and Variability in AI Language Models
Artificial Intelligence (AI) language models often generate different responses to the same query, leading to perceptions of inconsistency and subjectivity. This article delves into the reasons behind this variability, including the probabilistic nature of AI, contextual dependence, diversity in training data, and other influencing factors. It also offers insights on achieving greater consistency in AI interactions.
Secure Your Site: A Comprehensive Guide to WordPress Backup and Restoration
Backing up and restoring a WordPress website is a critical task for website administrators, ensuring that website data is not lost due to unforeseen circumstances such as server crashes, hacking, or accidental deletions. This article will guide you through the processes involved in backing up and restoring your WordPress website, provide an overview of popular backup and restore plugins, help you choose the appropriate backup and restore approach, and hopefully help you recover your site quickly and efficiently when needed.
The Test of Commitment and The Nuances of Estimation: Lessons from the IT Trenches
Across three decades of diving deep into the intricacies of IT, from software engineering to enterprise architecture, there’s a multitude of lessons I’ve unearthed. However, two recent experiences bring to light some invaluable insights about software delivery and the broader strokes of business ethos.
Consider a scenario where your project hinges on the delivery timeline of an external firm. A firm that’s been given two years to code a mere API HTTP PUT request, with the deadline stretching from mid to end September. The stakes? The very funding of your project. Enter the vendor’s sales representative: brimming with confidence, he assures an on-time delivery. Yet, when playfully challenged with a wager to affirm this confidence, he declines. A simple bet revealing the chasm between rhetoric and conviction.
Such instances resonate with an enduring truth in software delivery and business: actions always echo louder than words. The real “test” in business is not just about meeting estimates or deadlines, but about the conviction, commitment, and authenticity behind the promises.
However, alongside this test of commitment, lies another challenge I’ve grappled with regardless of the cap I wore: estimating software delivery. Despite my extensive track record, I’ve faced moments when estimates missed the mark. And I’m not alone in this.
Early in my career, Bill Vass, another IT leader, imparted a nugget of wisdom that remains etched in my memory. He quipped, “When it comes to developer estimates, always times them by four.” This wasn’t mere cynicism, but a recognition of the myriad unpredictabilities inherent in software development, reminiscent of the broader unpredictabilities in business.
Yet, the essence isn’t about perfecting estimates. It revolves around three pillars: honesty in setting and communicating expectations; realism in distinguishing optimism from capability; and engagement to ensure ongoing dialogue through the project’s ups and downs.
In the grand tapestry of IT and business, it’s not always the flawless execution of an estimate or a delivered promise that counts. At the end of the day, an estimate is just that; an estimate. The crux lies in how we navigate the journey, armed with authenticity, grounded expectations, and unwavering engagement. These cornerstones, combined with real-world lessons, are what construct the foundation of trust, catalyse collaborations, and steer us toward true success.
Comparing Technical Proving, MVP, and Spike in Enterprise Architecture
Introduction
As enterprise architects navigate the complex landscape of delivering value and mitigating risks, different approaches come into play. Two prominent methods, Technical Proving and Minimum Viable Product (MVP), offer unique benefits in enterprise architecture. Additionally, the concept of a “spike” provides a focused investigation to address specific uncertainties. In this article, we will compare Technical Proving and MVP while also discussing the characteristics and purpose of a spike, offering insights into their respective roles in enterprise architecture.
Technical Proving
Validating Technical Concepts

Technical Proving involves building small-scale prototypes or proofs of concept to validate the feasibility and viability of technical concepts. Its primary objective is to evaluate technical aspects such as architecture, frameworks, performance, scalability, and integration capabilities. By identifying potential risks early on, architects can make informed decisions and mitigate any issues that may arise during implementation.
Benefits of Technical Proving
- Risk Mitigation: Technical Proving minimizes risks by validating technical concepts before full-scale implementation. It helps identify potential roadblocks or challenges, enabling proactive mitigation.
- Informed Decision-Making: By rapidly prototyping technical elements, architects gain valuable insights into the feasibility of various solutions. This knowledge empowers them to make informed decisions and streamline the development process.
- Resource Optimization: Technical Proving ensures efficient resource allocation by focusing on high-potential solutions and discarding unfeasible options. It prevents unnecessary investments in non-viable concepts.
Minimum Viable Product (MVP)
Delivering Value and Gathering Feedback

MVP is an approach that involves developing a functional product with minimal features and capabilities to address a specific problem or deliver immediate value to users. The primary goal of an MVP is to obtain feedback from early adopters and stakeholders, enabling architects to iteratively refine and enhance the product based on real-world usage and user input.
Benefits of MVP
- Early Validation: By releasing a minimal version of the product, architects can validate their assumptions and gather valuable feedback. This enables quick iterations and improvements, enhancing the chances of success in the market.
- Cost Efficiency: MVPs focus on delivering essential functionality, reducing development costs and time-to-market. By avoiding extensive upfront investment in unnecessary features, resources can be allocated more effectively.
- User-Centric Approach: MVPs prioritize user feedback and involvement, ensuring that the final product aligns closely with user needs. This customer-centric approach improves user satisfaction and increases the chances of successful adoption.
The Role of a Spike
In addition to Technical Proving and MVP, another approach called a spike plays a distinct role in enterprise architecture. A spike is an exploratory investigation that focuses on addressing specific uncertainties or concerns, usually in a time-bound and limited-scope manner. Unlike Technical Proving and MVP, a spike is not intended for broad validation or market testing but rather for gathering targeted knowledge or data.
Characteristics of a Spike
- Targeted Investigation: Spikes focus on exploring a specific area of concern or uncertainty, providing deeper insights into a particular problem or technology.
- Time-Bound: Spikes have a fixed timeframe allocated for the investigation, ensuring focused and efficient efforts.
- Learning and Discovery: The primary goal of a spike is to gather knowledge and insights that can guide decision-making and inform subsequent development efforts.
Differentiating Spike from Technical Proving and MVP
While Technical Proving and MVP serve broader purposes, spikes are narrow and point-specific investigations. Technical Proving validates technical concepts, MVP delivers value and gathers feedback, while spikes focus on targeted exploration to address uncertainties.
Conclusion
In the realm of enterprise architecture, Technical Proving and MVP offer valuable approaches for validating concepts and delivering value. Technical Proving mitigates technical risks, while MVP emphasizes user value and feedback. Additionally, spikes provide focused investigations to address specific uncertainties. Understanding the characteristics and appropriate use cases of these approaches empowers architects to make informed decisions, optimize resource allocation, and increase the chances of successful outcomes in enterprise architecture endeavours.
John Zachman, father of Enterprise Architecture, to present at the next BCS Enterprise Architecture Speciality Group event on Tuesday the 6th of October, 2009
Wow! The BCS Enterprise Architecture Speciality Group has secured John Zachman, the de facto father of Enterprise Architecture and creator of the “Zachman Framework for Enterprise Architecture” (ZFEA), to speak at its next event on Tuesday the 6th of October. Talk about a major coup! The BCS EA SG is really getting busy; it is the fastest growing BCS Speciality Group I’ve seen so far, with 750+ members, and is gaining new members on a daily basis.
Come along and see John speak about “Enterprise Design Objectives – Complexity and Change”, at the Crowne Plaza Hotel, 100 Cromwell Road, London SW7 4ER on Tuesday the 6th of October, 2009. You can register your place here: http://www.ea.bcs.org/eventbooking/showevent.php?eventid=esg0908
Of course, a serious advantage of the BCS EA SG is that it is framework agnostic, and as such can look at best practices and framework capabilities from across the EA community. In fact, less than six months ago a preceding event was an update on the recently released TOGAF 9 standard from the Open Group (typically seen as one of the other major frameworks, alongside ZFEA, although you often encounter organisations using a blended, best-of-breed approach when it comes to EA implementation).
The BCS EA SG has got some other great events lined up, and I’m especially looking forward to hearing “Links with other IT disciplines such as ITIL and strategy” on Tuesday the 15th of December, 2009, over at the BCS London headquarters at 5 Southampton Street, London WC2E 7HA. Details of this event are still being confirmed, but it’ll be great to see how thoughts on mapping major capabilities to EA match with my own (I’ve been doing rather a lot in terms of co-ordinating EA, Service Management and Portfolio Management lately). Plus, since TOGAF 9 removed the genuinely useful appendices showing mappings between TOGAF, ZFEA, and other disciplines and frameworks, promising to have them published as standalone white papers, it’s great to know that experience and knowledge in this important area has not been forgotten and is in fact being collated and compiled by the BCS EA SG team.
I’m really looking forward to seeing John speak on the 6th, and if you can make it I hope to see you there too! And please do come over and say “Hi” if you get a chance.
- Recovered link: https://horkan.com/2009/09/17/john-zachman-bcs-enterprise-architecture
- Archived link: https://web.archive.org/web/20100531083459/http://blogs.sun.com/eclectic/entry/john_zachman_bcs_enterprise_architecture
- Original link:
http://blogs.sun.com/eclectic/entry/john_zachman_bcs_enterprise_architecture
The problem with automated provisioning (III of III)
This is the third of my articles on the macro-level issues with (automated) provisioning, building on the previous articles, specifically the comparison of Enterprise versus “web scale” deployments described in “The problem with automated provisioning (II of III)” and the levels of complexity, in terms of automated provisioning, set-up, and configuration, that are required.
As I’ve said before in this series of articles, provisioning a thousand machines which all have the same OS, stack, and code base, with updated configuration information, is easier to set up than a thousand machines which use a mixture of four or five Operating Systems, all with differing patch schedules, patch methods, and code release schedules, and with a diverse infrastructure and application software stack and multiple code bases. To express this I’ve postulated the equation “(Automated) Provisioning Complexity = No. of Instances x Freq. of Change”.
What I’d like to move the focus to now is runtime stability, and the ability of a given system to support increasingly greater levels of complexity.
I find it important to recognise the place of observation and direct experience as well as theory and supposition (in research it’s useful to identify patterns and then try to understand them).
Another trend that I have witnessed with regard to system complexity, including the requirement to provision a given system, is that the simpler and more succinct a given architectural layer, the more robust that layer is, and the more able it is to support layers above it which have higher levels of complexity.
Often architectural layers are constrained in their ability to support (and absorb) high numbers of differing components and high rates of change by the preceding layer in the stack. In other words, the simpler the lowest levels of the stack, the more stable they will be, and thus the more able to support diverse ecosystems, with reasonable rates of change, in the layers above them.
The more complex the layer below, the less stable it is likely to be (given that the number of components, the instances thereof, and the rate of update significantly drive up the level of complexity of the system).
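To make the shape of this argument concrete, here is a toy sketch in Python (my own illustration, with entirely made-up figures, not a measurement of any real system) of the idea that complexity ‘spent’ in a lower layer reduces what the layers above can safely absorb, reusing the complexity equation from this series:

```python
# Toy model: complexity "spent" low in the stack constrains the layers above.
# All figures are illustrative assumptions, not measurements.

def complexity(num_instances: int, changes_per_year: int) -> int:
    """Macro-level complexity of one layer: instances x frequency of change."""
    return num_instances * changes_per_year

def headroom(total_budget: int, lower_layer_complexity: int) -> int:
    """Complexity the upper layers can still absorb, assuming a fixed budget."""
    return total_budget - lower_layer_complexity

BUDGET = 500  # arbitrary: what the whole system can absorb and stay stable

# Enterprise-style base: eight OSes, each patched/updated six times a year.
enterprise_base = complexity(num_instances=8, changes_per_year=6)   # 48
# Web-scale-style base: one common OS image, same patch cadence.
webscale_base = complexity(num_instances=1, changes_per_year=6)     # 6

print(headroom(BUDGET, enterprise_base))  # 452 left for the layers above
print(headroom(BUDGET, webscale_base))    # 494 left for the layers above
```

Swap in your own counts and change rates; the point is simply that a heterogeneous base layer consumes stability that the layers above then cannot.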
This phenomenon is found in the differing compute environments I’ve been speaking about in these short articles, and again it affects the ability of a given system to be provisioned in any succinct and efficient manner.
More accurate Enterprise
Typically Enterprise IT ecosystems are woefully complex, due to a mixture of longevity (sweating those assets, and risk aversion) and large numbers of functional systems (functional as in functional requirements) and non-functional components (i.e. heterogeneous infrastructure, with lots of exceptions, one-off instances, etc.).
Consequently they suffer from the issue that I’ve identified above: as the lower levels are already complex, they are constrained in the amount of complexity that can be supported at the levels above; the accompanying diagram demonstrates the point.
More accurate Web Scale
Web Scale class systems, by contrast, often exhibit almost the opposite behaviour. Given they often use a radically simplified infrastructure architecture anyway (i.e. lots of similar and easily replaceable common, often ‘commodity’, components) in a ‘platform’ approach, there aren’t the high levels of heterogeneity that you see in a typical Enterprise IT ecosystem; they are homogeneous. And this approach is often found in the application and logical layers above the infrastructure too, i.e. high levels of commonality of software environment, used as an application platform to support a variety of functionality, services, code, and code bases.
Consequently, because of the simple nature of the low-level layers of the architecture, they are much more robust and capable of withstanding change (because introducing change into a complex ecosystem often leads to something, somewhere, breaking, even with exceptional planning). This stability and robustness ensures that the overall architecture is better equipped to cope with change, and with the frequency of change, and that layers with high levels of complexity can be supported.
And so that concludes my articles on provisioning, and the problems with it, for the time being, although I might edit them a little, or at least revisit them, when I have more time.
The problem with automated provisioning (II of III)
This is the second of my articles on the macro-level issues with (automated) provisioning, focusing again on the theme of complexity as the product of “No. of Instances” x “Freq. of Change” described in the previous article, “The problem with automated provisioning (I of III)”, but this time comparing an Enterprise Data Centre build-out with a typical “Web Scale” Data Centre build-out.
Having built out both of the examples demonstrated, I find the below a useful comparison when describing some of the issues around automated provisioning, and occasionally why there are misconceptions about it from those who typically deliver systems from one of these ‘camps’ and not the other.
Enterprise
Basically, the number of systems within a typical Enterprise Data Centre (and within that Enterprise itself) is larger than that in a Web Scale (or HPC) Data Centre (or supported by that organisation), and the number of differing components that support those systems is higher too. For instance, at the last Data Centre build-out I led there were around eight different Operating Systems being implemented. This base level of complexity, which is then exacerbated by the frequency of having to patch and update it all (as demonstrated by the “Automated Provisioning Complexity = No. of Instances x Freq. of Change” equation), significantly impacts any adoption of automated provisioning (it makes defining operational procedures more complex too).
Web Scale
Frankly, a Web Scale build-out is much more likely to use a greater level of standardisation, to be able to drive the level of scale and scaling required to service the user requests and to maintain the system as a whole (here’s a quote from Jeff Dean, Google Fellow: “If you’re running 10,000 machines, something is going to die every day.”). This is not to say that there is not a high level of complexity inherent in these types of system; it’s just that the engineering effort required to ensure that the system can scale to service many hundreds of millions of requests may well demand a level of component standardisation well beyond the typical level you’d see in an Enterprise-type deployment (where functionality and maintenance of business process is paramount). Any complexity is more likely to be in the architecture needed to cope with said scaling, for instance distributed computational proximity algorithms (i.e. which server is nearest to me physically, so as to reduce latency, versus which servers are under most load, so as to process the request as optimally as possible), or in the distributed configuration needed to maintain said system as components become available and are de-commissioned (for whatever reason).
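To illustrate the first of those examples, here is a deliberately toy sketch (my own simplification, not Google’s or any other operator’s actual algorithm) of a proximity-versus-load server selection:

```python
# Toy proximity-vs-load trade-off: pick the server that best balances
# physical nearness (latency) against current utilisation.
# The weighting scheme and figures are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    latency_ms: float  # network proximity to the requesting client
    load: float        # current utilisation, 0.0 (idle) to 1.0 (saturated)

def pick_server(servers: list[Server], latency_weight: float = 0.5) -> Server:
    """Score each server by weighted latency and load; lower is better."""
    def score(s: Server) -> float:
        # Scale load to a 0-100 range so it is comparable with milliseconds.
        return latency_weight * s.latency_ms + (1 - latency_weight) * (s.load * 100)
    return min(servers, key=score)

servers = [
    Server("near-but-busy", latency_ms=5, load=0.9),
    Server("far-but-idle", latency_ms=40, load=0.1),
]
print(pick_server(servers).name)  # "far-but-idle" with the default weighting
```

The real systems are vastly more sophisticated, of course, but the sketch shows why the complexity moves into the architecture rather than the component inventory.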
Automated Provisioning Complexity = No. of Instances x Freq. of Change
At the most basic level, provisioning a thousand machines which all have the same OS, stack, and code base, with updated configuration, is easier to set up than a thousand machines which use a mixture of four or five Operating Systems, all with differing patch schedules and patch methods, and with a diverse infrastructure and application software stack and multiple code bases. I suspect that upon reading this you may think it an overly obvious statement to make, but it is the fundamentals that I keep seeing people trip up on over and over again, which infuriates me no end; so, yes, expect an upcoming article on the “top” architectural issues that I encounter too.
Build-outs for HPC, or High Performance Computing, the third major branch of computing, usually follow the model above for “web scale” systems. I have an upcoming article comparing the three major branches of computing usage, Enterprise, Web Scale, and HPC, in much greater detail; for the time being, however, the comparison above is adequate to demonstrate the point I am drawing to your attention: that complexity of environment exacerbates the implementation of an automated provisioning system. Hope you enjoyed this article; it is soon to be followed by a reappraisal and revised look at Enterprise and Web Scale provisioning.
The problem with automated provisioning (I of III)
Referring back to my previous article, “The problem with automated provisioning – an introduction”: once you get past those all-too-human issues and into the ‘technical’ problem of provisioning, my initial assessment would have been much nearer the mark, because it is indeed an issue of complexity. The risks, costs, and likely success of setting up and maintaining an automated provisioning capability are integrally linked to the complexity of the environment to be provisioned.
There are a number of contributing factors, including the number of devices and virtual instances, and their location and distribution relative to the command and control point, but the two main ones in my mind are “Number of Instances” and “Frequency of Change”.
And so ‘Complexity’, in terms of automated provisioning, at a macro level, can be calculated as “Number of Instances” multiplied by “Frequency of Change”:
No. of Instances x Freq. of Change
By “Number of Instances” I mean the number of differing operating systems in use, the number of differing infrastructure applications, the number of differing application runtime environments and application frameworks, the number of differing code bases, the number of content versions being hosted, etc.
By “Frequency of Change” I am drawing attention to patches, code fixes, version iterations, code releases, etc., and how often they are delivered.
The following diagram demonstrates what I frequently call ‘The Problem with Provisioning’; as you can see I’ve delineated against three major architectural “levels”, from the lowest and nearest to the hardware, the OS layer which also contains ‘infrastructure software’, the Application layer, containing the application platform and runtime environment, and the “CCC” layer containing Code, Configuration and Content.
In a major data-centre build-out it is not atypical to see three, four, or even more different operating systems being deployed, each of which is likely to require three- or six-monthly patches, as well as interim high-value patches (bug fixes that affect the functionality of the system, and security patches). Furthermore, it’s likely the number of ISV applications, COTS products, and application runtime environments will be much higher than the number of OS instances, and that the number of “CCC” instances will be higher still.
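Plugging some illustrative numbers into the equation layer by layer (the figures below are my own assumptions in the spirit of the above, not from a real build-out) shows how the complexity score climbs as you move up the stack:

```python
# A rough worked example of the equation applied per architectural layer:
#   Provisioning Complexity = No. of Instances x Freq. of Change
# All counts and change rates are illustrative assumptions only.

layers = {
    # layer: (number of differing instances, changes per year)
    "OS / infrastructure software": (4, 6),    # 4 OSes, quarterly + interim patches
    "Application / runtime":        (12, 12),  # ISV/COTS products, monthly updates
    "Code, Config, Content (CCC)":  (30, 52),  # many code bases, weekly releases
}

for layer, (instances, changes) in layers.items():
    print(f"{layer}: {instances * changes}")
# OS / infrastructure software: 24
# Application / runtime: 144
# Code, Config, Content (CCC): 1560
```

The exact figures don’t matter; the shape does, with the “CCC” layer dominating on both counts and churn.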
I find it important to separate the system being provisioned into these three groupings because they typically require differing approaches (and technologies) for the provisioning thereof, something I mentioned in the previous article when organisations mistakenly believe that the provisioning technology that they have procured will scale the entire stack, from just above ‘bare metal’ to “CCC” changes (I’ve seen this issue more than once, even from a Sun team who should have known better, albeit around three years ago).
This model brings to the fore the increasing level of complexity, both in the components at each layer and in the frequency of the changes that then occur; and although the model is a trifle simplistic, it is useful when describing the issues that one can encounter when implementing automated provisioning systems, especially to those with little knowledge or awareness of the topic.
The problem with automated provisioning – an introduction
I was going to start this short series of articles with the statement that the problem with provisioning is one of complexity, and I’d have been wrong: the predominant issues with provisioning, and specifically automated provisioning, are awareness and expectation.
Awareness and Expectations
The level of awareness of what can actually be done with automated provisioning, and often, more importantly, what cannot be done, or even what automated provisioning actually “is”, is a significant barrier. This is followed by the expectations set, both by end users hoping for IT “silver bullets”, who may well have been oversold to, and by Systems Integrators, product vendors, and ISVs, who sadly promise a little too much to be true, or are a trifle unaware of the full extent of their own abilities (positivity and confidence aside).
For instance, I was once asked to take over and ‘rescue’ the build-out of a data centre on behalf of a customer and their outsourcer (£30M+ to build out, estimated £180M total to build and run for the first five years).
Personally I would say that this data-centre build-out was of medium complexity, being made up of more than five hundred Wintel servers, circa three hundred UNIX devices, and around two hundred ancillary pieces of hardware, including network components (firewalls, switches, bridges, intelligent KVMs and their ilk), storage components (such as SAN fabric, high-end disk systems such as the Hitachi 9900 range, and high-end tape storage), and other components.
One of the biggest problems in this instance was that the contract between client and vendor stipulated using automated provisioning technologies; not a problem in itself, however an assumption had been made, by both parties, that the entire build-out would be done via the provisioning system, without a great deal of thought following this through to its logical conclusion.
Best to say here that they weren’t using Sun’s provisioning technology, but the then ‘market leader’; however, the issues were to do with neither the technology nor the functionality and capabilities of the provisioning product. It is likely that similar problems would have been encountered even if it had been Sun’s.
This particular vendor had never implemented automated provisioning technologies on a brand-new “green-field” site before; they had always implemented them on existing “brown-field” sites, where, of course, there was an existing and working implementation to encapsulate in the provisioning technology.
As some of the systems were being re-hosted from other data-centres (in part, savings were to be made as part of a wider data-centre consolidation), another assumption had been made that this was not a fresh “green-field” implementation but a legacy “brown-field” one. However, this was a completely new data-centre, moving to upgraded hardware and infrastructure, never mind later revisions of application runtime environments, new code releases, and partly enhanced, along with wholly new, functionality too. In other words, this was not what we typically call a “lift and shift”, where a system is ‘simply’ relocated from one location to another (and even then ‘simply’ is contextual). Another major misconception, and example of incorrectly set expectations, was that the provisioning technology in question would scale the entire stack, from just above ‘bare metal’ to ‘Code, Configuration and Content’ (CCC) changes, something that was, and still is, extremely unlikely.
Sadly, because of these misconceptions and the lack of forethought, predominantly on behalf of the outsourcer, no one had allowed for the effort either to build out the data-centre in its entirety and then encapsulate it within the provisioning technology (a model they had experience of, and which was finally adopted), or to build the entire data-centre as ‘system images’ within the provisioning technology and then use it to implement the whole data-centre. The latter would have taken a great deal longer, not least because testing a system held purely as system images would have been impossible: the images would have to be loaded onto the hardware to do any testing, whether of the provisioning system itself, or real-world UAT, system, non-functional, and performance testing.
Unsurprisingly, one of the first things I had to do when I arrived was raise awareness that this was an issue, as it had not been fully identified, before getting agreement from all parties on a way forward. Effort, cost, resources, and people were all required to develop the provisioning and automated provisioning system into a workable solution. As you can guess, there had been no budget put aside for any of this, so the outsourcer ended up absorbing the costs directly, leading to increased resentment of the contract that they had entered into and straining the relationship with the client. However, this had been their own fault, through lack of experience and naivete when it came to building out new data-centres (this had been their first, so they did a lot of on-the-job learning and gained a tremendous amount of experience, even if much of it was in how not to build out a data centre).
This is why I stand by the statement that the major issues facing those adopting automated provisioning are awareness, of the technology and what it can do, and expectations, of the level of transformation and business enablement it will facilitate, as well as of how easy it is to do. The other articles in this series will focus a little more on the technical aspects of the “problem with provisioning”.
Jakob Nielsen: “Mobile User Experience is Miserable”
The latest research into mobile web user experience says that overall the experience is “miserable”; it cites the major issues with mobile web usage, and looks at overall “success” rates which, although improved from the results of research in the mid-1990s, are much lower than typical PC and workstation results.
It is well worth a read for those looking at optimising for mobile readership and audience and the full report is available here: http://www.useit.com/alertbox/mobile-usability.html
The report names two major factors in improving the aforementioned success rates: sites designed specifically with mobile use in mind, and improvements and innovations in phone design (smartphones and touch screens perform best).
Jakob Nielsen, ex-Sun staff member and Distinguished Engineer, is famous for his work in the field of “User Experience”, and his site is a key resource for advice and best practice in web, and other types of, user experience design.
- Recovered link: https://horkan.com/2009/07/22/jakob-nielsen-mobile-user-experience
- Archived link: https://web.archive.org/web/20100713051548/http://blogs.sun.com/eclectic/entry/jakob_nielsen_mobile_user_experience
- Original link:
http://blogs.sun.com/eclectic/entry/jakob_nielsen_mobile_user_experience
Bill Vass’ top reasons to use Open Source software
You might not have seen Bill Vass’ blog article series on the top reasons to use and adopt Open Source software; as it’s such an insightful series of articles, I thought I’d bring it to your attention here.
Each one is highly data-driven and contains insight that you probably haven’t seen before but that is useful to be aware of when positioning Open Source to a CTO, a CIO, or an IT Director, because of Bill’s viewpoint (having come from a CIO and CTO background). Often when you see this sort of thing written it can be rather subjective, almost ‘faith based’, so I’m always on the lookout for good factual information that is contextually relevant.
Bill Vass’ top reasons to use and adopt open source:
And before you mention it, I know Bill already summarised these articles in his lead-in piece “The Open Source Light at the End of the Proprietary Tunnel…“, but it was such a great set of articles it seems a shame not to highlight them to you!
The Reasons Projects and Programmes Fail
In this post I’ll describe the five categories of reasons that projects and programmes fail that I’ve identified. This categorisation has been built up from doing a large number of system, project, and programme reviews and audits over the years, and this article follows on from the project review and programme audit framework which I wrote about recently.
Whatever problems are found in a project or programme, in my experience they can be broken down into these five categories:
- Strategic / Alignment
- Contractual / Financial
- People / Politics
- Process / Procedural
- Technical / Architectural
For a number of years my categorisation of reasons why projects and programmes fail did not include “Strategic / Alignment” as an area, and was a model made up of just the other four categories, but I kept coming across a couple of definitive reasons why it should be added; more on this below.
So let’s look at these five categories individually in more detail:
- Strategic / Alignment: There is a fundamental lack of strategic alignment to the Business.
Basically the project should never have been commissioned in the first place. It is either not required whatsoever (and yes, shockingly, I have come across this happening), or is no longer required (either because of a change of business circumstance, or functionality overlap with another system, i.e. something else does this just fine thank you very much).
A lack of an Executive Sponsor is a good indication that this could be an issue, and even if the project or programme is some form of ‘Skunk Works’ you would expect the overall ‘Skunk Works’ innovation concept and framework to be supported by an Executive Sponsor, such as the Head of R&D, and for a watching brief to be kept over costs versus potential revenues and benefits.
Projects and programmes which are purely or highly non-functional, and provide limited, or unperceived business benefit, may also be an indication of this issue.
- People / Politics: Getting people to work together can be complex and difficult, especially when their goals are not co-ordinated. Long-term political enemies, and people competing for resources, promotions, and remuneration, are all potential issues.
This magnifies up at a macro level into business units being in competition for talent, resources, and even access to customers and partners. Programmes where multiple business units have to work together and integrate systems and functionality are almost always problematic, even when there are serious penalties if it is not done.
In general, governance compliance issues and management failings also fall into this category, as do business conduct issues, morale, etc.
- Process / Procedural: The process is ‘broken’. Procedures are not in place or are not being complied with. Either the wrong process was used in the first place, or it is not being adhered to correctly, or it is not being used at all.
Alternatively, a process is in place but is oversubscribed and cannot ‘scale’, or does not have enough people to service it, perhaps because of downsizing or the like.
The state of the Project Management Office (PMO), and whether there is a stable, authoritative document repository, are also indicators of whether there is a problem in this area, as is a lack of due diligence when managing and implementing change control.
Governance, in terms of an appropriate operating model and related procedural items, belongs here too.
- Contractual / Financial: For some reason the financial arrangements of the project are having a negative impact on the ability to deliver it. Perhaps the contract is counter-intuitive, or is weighted in such a way that the ends are not easily achieved, or does a poor job of defining the requirements.
If you hear something like the “spirit of the contract” versus the “word of the contract” then this is a good indicator that there is an issue with the contract and that it doesn’t cover what is wanted or expected.
Be aware that this is likely to be a problem shared by the client and their vendors, as mutual understanding develops of what can be delivered versus what is wanted and needed by the business. This is an iterative learning process: the business learns more about what can be delivered by technology and the system being defined, whilst those involved in delivery learn the semantics, language, and nature of the business, and experience more of the challenges that the business has.
- Technical / Architectural: This is last for a very good reason: it is often the smallest contributing factor in projects failing to deliver.
When there are issues in this area, in my experience they are often a matter of not having the appropriate people and skills at the right time, or of not accurately identifying the key individuals you require, rather than hard technology issues.
Other issues are architectural and compositional problems (more on architectural issues in an upcoming article), access to resources at the right time, and the typical technology compatibility issues (i.e. “what works with what”) and access to vendor technology and knowledge bases.
As a reviewer of projects and programmes which could be failing, it’s likely that you will have come from a technology implementation background and that this area is well within your ‘comfort zone’; but I assure you that in the majority of cases technology is only a minor contributing factor to the failure of an overall project, nor is it the hardest problem area to improve upon (with good recommendations, of course). It may, however, be an area that you over-focus upon, losing sight of more significant issues at hand.
Again, I hope you enjoyed the article; I will try to look at some other pieces, such as the top architectural mistakes made, how to identify possibly failing projects, and suggestions for rescuing them.
- Recovered link: https://horkan.com/2009/07/21/reasons-projects-and-programmes-fail
- Archived link: https://web.archive.org/web/20100713051548/http://blogs.sun.com/eclectic/entry/reasons_projects_and_programmes_fail
- Original link:
http://blogs.sun.com/eclectic/entry/reasons_projects_and_programmes_fail
Project Review and Programme Audit Framework – a simple example of its use
This is a simple example review utilising the project review and programme audit framework that I wrote about in the preceding article. …
- Recovered link: https://horkan.com/2009/07/20/example-project-review-programme-audit
- Archived link: https://web.archive.org/web/20100713051548/http://blogs.sun.com/eclectic/entry/example_project_review_programme_audit
- Original link:
http://blogs.sun.com/eclectic/entry/example_project_review_programme_audit
Project Review and Programme Audit Framework
So this is the framework I have developed and use for reviewing and auditing failing projects, programmes, and systems. As I might have said before, this is a simple, effective framework, based on my experience, and although you might have seen approaches like this before, this is one that I have personally used to great effect.
To an extent this framework is a description of how I document a review, and of the process steps that I take as well; the major difference is that the process itself is likely to be iterative, and you will learn things during the review which generate fresh lines of inquiry.
I get asked to perform these types of review probably because I’ve done a large number of them and have become quite good at them; however, originally I think it was because I have an analytical and inquiring mind, I am tenacious enough to chase down what is really happening in a situation, I have a broad and deep appreciation of technology and its implementation, I have a great deal of project and programme experience across a number of industries, and I am good at getting people to tell me about the problems they are experiencing. I expect these are the type of qualities you would probably want to encourage to become better at this type of activity; an unkind person might say that being a pedant and didact can help too.
So I separate my reviews down into five simple areas:
- Problem(s)
- Fact(s)
- Result(s)
- Conclusion(s)
- Recommendation(s)
I bet you’re thinking “well that’s obvious Wayne”, but simplicity is always an imperative when you set out, because believe me, the complexity of the issues you’re going to find can sometimes seem overwhelming. So explaining what I mean by these five headings:
- Problem(s): Or rather, perceived problem(s), because this is what the client thinks is wrong, or at the very least is the effect rather than the cause (unless they know, or think they know, what the problem is and just want an expert to confirm their opinions). If in doubt, this should simply be why you have been commissioned.
This section should not really be that large, because otherwise I would expect that items from the following sections have ‘leaked’ into here, most likely from the ‘fact(s)’ section. For instance if the MD of a company who has commissioned you to review a system starts telling you about all the individual issues that they are having then you are clearly in ‘gathering facts’ mode and much of this should end up in the next section.
I would typically expect this section to be only a paragraph or a couple of paragraphs at most; if it’s running to half a page or more I’d be concerned, because in a large review clarity matters a great deal. Even in a large-scale, complex review, be careful with this section: if it is too large it could point to an over-focus on detail which should be drawn out later in the review, or to a problem with the level of abstraction used in the problem description.
Examples of ‘Problems’ I’ve been asked to investigate include:
- We’ve spent £10s of millions with our IT supplier and the web sites which they have built are still not available when they need to be; what is going wrong?
- We’ve spent £30 Million plus to date on a data centre build out, which should be complete, and our IT supplier keeps telling us that it could be a month or a couple of months until it’s complete, but we have lost confidence in their updates.
- We have spent over £70 million on a large integration project which has yet to deliver its first release to the business, and I’ve just been told it’s going to need another £10 million immediately, and another £40 million to complete.
- We are just under two years into a ten-year, £300 million-a-year contract which has ‘ballooned’ to £800 million per year already, and yet our supplier still hasn’t delivered the ‘Transformation’ that they promised; what is stopping them?
- Fact(s): Gather and document facts. This should easily be your largest section, because data matters and you will need good data to make an appropriate diagnosis of the situation and to ensure you deliver a credible and believable review.
Obviously there are many ways to gather data, especially technically, e.g. gathering crash dumps, reading through code, measuring network, processing, and storage performance and capabilities, etc. For non-technical fact gathering you can review contracts and documentation, investigate online and offline document repositories, and review authorised and freely given email and communication trails and other ‘digital echoes’ as you see appropriate.
By far the most effective means of gathering facts in a large-scale and complex review is via interviews; in such a review you should expect the majority of fact gathering to come from interviewing. Inquiry, question development and delivery, structured interviewing, and aware, active listening matter a lot here. Never lead an interview to build a case for a theory or ‘pet’ view that you hold; remain impartial at all times.
It is important to be empathetic enough to be good at relating to people and getting them to open up when doing structured interviewing and active listening; if you are too proud or arrogant you can forget interviewing as a method of gathering data because it’s unlikely anyone will open up to you enough, and this will seriously impact your ability to perform reviews and audits in any meaningful manner.
People will be people in the interviews: they will be emotive, some may be reserved, stoic, cynical even, some will care, some won’t, a few may be objective, many are wittingly or unwittingly subjective, and all will have opinions.
Remember, interviewing is the no.1 manner in which good-quality data gathering is done for system, project, and programme reviews and audits; becoming fluent in performing interviews and capturing the data thereof is key to performing good reviews.
Do not lead the data, nor start to analyse it until a good body of data is gathered. Often, once facts have been gathered and analysis has started, more information will be required to perform a good-quality diagnosis. Be prepared to ask lots of questions, and be prepared to meet people who don’t want to answer you. Document everything.
- Result(s): This is where you will be relating facts to results; although some analysis and thought will have gone into deciding which information to gather and how, much of the real analysis ‘footwork’ starts here. This section is where you take the information presented so far and relate it to the issues and problems that the client is experiencing. Hopefully you now have information gathered and documented in the ‘Fact(s)’ section which is causing, or could be causing, the problems that the client is exhibiting. It is likely that you will be sorting facts, and the problems that they are associated with, into a basic set of categorisations (the next article in the series deals with those categories).
A simple example: a defined problem might be “the web server keeps falling over, we don’t know why”, whilst the related fact may be that “patches were not applied”; after more investigation it would probably be fair to link the two together thus: “a result of not applying the appropriate patches is that the web servers are unstable”. The reason you shouldn’t jump the gun and stop at the first thing you come across is that it may not be the root cause; it could be a contributing cause, or even unrelated. The good reviewer is appropriately thorough without being needlessly wasteful of the client’s time, money, or resources.
An example of a conclusion might be “without implementing and maintaining change control the project will continue to move out of control and will be increasingly difficult to deliver to time and budget, never mind delivering the contractually required document deliverables”.
If the facts and results do not map to the original issue for which the exercise was commissioned, you should consider iteratively gathering more data related to the original problem; alternatively, perhaps test the validity of the original problem description, politely questioning with the sponsor of the review the original area you were asked to examine (secure a meeting and let them know of the concerns and issues you are having). Document here any disparity between the originally identified problems, the facts gathered, and the results given.
- Conclusion(s): Defining conclusions is where you look at the facts and results and conclude what will happen if the situation continues. This is where you make rational predictions of a future state, suggesting what problems might occur in the future if no action (or the currently planned action) is taken. It would be dismissive to say that this is where any ‘scaremongering’ occurs, but it is important to inform, and possibly even warn, the client about further problems and issues that they might experience if the situation remains unchecked.
It is important that your conclusions address the original problem; although you may like to address any additional problems which have been drawn out during the review, it is not imperative to do so. I almost always do, because I feel an obligation to the client and I want to demonstrate my delivery focus too; you, however, may not find this something you have the time, or the desire, to do.
Again, this will probably be a short section, and although you may well have been creative before, this and the next section are where your good ideas need to appear. You need to ensure that you are not too fanciful, and personally I prefer not to be seen to influence any particular recommendation by ‘weighting’ any possible future state too negatively; however, I have seen a lot of reviews over the years which lacked impartiality.
If things are bad you must be honest and deliver the difficult news; whatever you do, do not attempt to ‘sugar-coat’ it and so detract from the important information and messages you are delivering to the business. I do heavily recommend that you inform your sponsor verbally early on, so as to ensure you do not deliver any surprises which could have a negative effect and lose or diminish their support.
- Recommendation(s): This is where you make suggestions for improving the situation; deliver recommendations which relate to the facts, results, and conclusions, and to the original problem. If other problems have come to light during the review and you have included them as part of the overall review, then you should include recommendations which address those problems as well.
Making recommendations could well be seen as the easy part by an experienced ‘expert’ within a certain field, and it is always attractive to the inexperienced reviewer to dive in with recommendations before proper analysis has been completed (i.e. “we’ve found these facts, and because the last project had a similar issue which we fixed by doing X, Y, and Z, we will try X, Y, and Z here”). This behaviour will likely lead either to the wrong problem being fixed, or to the current situation worsening, all of which wastes the client’s time, money, and resources.
With recommendations I like to remember the ‘Pareto principle’ (the “80-20 rule”): your principal recommendations should be mindful of it and have a significant impact in addressing the problem space originally described by the client. Minor recommendations are all well and good, but if they don’t “fix” the problem in the mind of the client, it’s unlikely that you will be asked to review for them again, or that your recommendations will be implemented at all.
Above all, recommendations are given so as to improve a situation, not to push any personal agenda; again, it is key to be impartial and objective.
The biggest problem you will likely have in using a framework like this, or any other, is that early on you will find content landing in the section before or after the one it should be in; as you become more familiar with the framework, and more experienced at doing reviews and audits, this should improve.
Also, do not imagine that the only place you bring value is in the ‘Recommendation(s)’; this is grossly incorrect, because the client may well not have gathered the data you have, analysed it in the same way, nor come to the same conclusions. Your work will ultimately improve their understanding of the situation and allow them to plan accordingly, and this is the genuine value.
Of course a good review document will contain more than the above (probably references, appendices, document control, etc.); however, the above is the absolute core of a good review, in my opinion and experience. If you find yourself arguing with your co-reviewers about the document version control table, you are way off the mark, because fundamentally the quality of the review is paramount, as well as the effect it brings (hopefully a resolution to the issues for which it was commissioned).
My friend Chris Loughran, of Deloitte, uses a framework even more stripped-down and ‘lean’ than this, delineating it into (1) gather facts, (2) relate results, and (3) make conclusions, which is certainly punchier and easier to explain in short order to your typical senior executive or CxO with very limited time. But of course Chris is one of the leading business and technology consultants in the UK, so this is to be expected, and he is highly effective using this approach. Personally, as I’ve written about in this article, I prefer to document the (perceived) problem and to ensure recommendations are distinct from conclusions.
As usual, I hope you have enjoyed the article, despite it being a lot larger than I hoped. The next one looks at the categories of reasons why projects and programmes fail (and I’ve just decided to deliver, and have subsequently written, a short article documenting an example of the above review and audit framework too).
- Recovered link: https://horkan.com/2009/07/19/project-review-programme-audit-framework
- Archived link: https://web.archive.org/web/20100713051548/http://blogs.sun.com/eclectic/entry/project_review_programme_audit_framework
- Original link:
http://blogs.sun.com/eclectic/entry/project_review_programme_audit_framework
Reviewing, auditing and rescuing failing projects and programmes best practice
One of the things I’ve done rather too much of is being asked to review failing projects, programmes, and build-outs for customers, clients, and partners, and to come up with solutions and recommendations to help resolve their problems; this is often followed in short order by being asked to help rescue them (often leading to Sun helping them too).
Over the years I’ve built up a fairly large body of case studies and examples, which I will share once I’ve anonymised the material a little and written it up; for now, I’ve put together a couple of articles that draw on this experience.
The first to follow this leader is an article on a project review / programme audit framework: a simple, highly effective, and yet generic method for setting out reviews.
The second is a piece on why projects fail, or at least the five macro-level categories of reasons why projects fail, within which I’ve found all programme problems seem to fall. This is at an appropriately high level to be useful to those who review and audit problem implementations and systems; don’t expect to find items such as “it was a triple-indirected pointer to a function in C / C++ that ended up at the wrong memory location”.
Anyway, I really hope you enjoy the articles, because, well, frankly, there is a lot of time, effort, experience, and failed-project knowledge boiled down into them.
- Recovered link: https://horkan.com/2009/07/18/project-review-programme-audit-experience
- Archived link: https://web.archive.org/web/20100713051548/http://blogs.sun.com/eclectic/entry/project_review_programme_audit_experience
- Original link:
http://blogs.sun.com/eclectic/entry/project_review_programme_audit_experience
Hosting a BCS Enterprise Architecture event on EA Tools tomorrow in Manchester
Tomorrow evening I’m going to be hosting a BCS Enterprise Architecture (EA) speciality group (SG) event looking at EA tools in Manchester.
Details for the event:
Date / Time: Friday the 17th of July, 2009 / 17:45 for 18:00, expected to finish at 19:30
Location: Room E0.05, John Dalton Building, Manchester Metropolitan University, Chester Street, Manchester M1 5GD
More information and registration over at the BCS EA SG website page for the event: http://www.ea.bcs.org/eventbooking/showevent.php?eventid=esg0906
Event synopsis from the BCS EA SG:
The focus of enterprise modelling is now shifting away from purely technical and system aspects and becoming more holistic, thereby necessitating the use of comprehensive modelling tools to analyse and optimise the portfolio of business strategies, organisational structures, business processes, information flows, and services. Organisations would be ill-advised to proceed with Enterprise Architecture without utilising a modelling toolset able to meet all the requirements. A central repository is vital to provide a common information source for everyone involved; also important is the ability to base the process on a framework and customise the models to fit each organisation’s own situation. The presentation will cover Enterprise Architecture tool evolution, key tool capabilities, and market overview.
We have secured Mark Blowers as the night's speaker. Mark has over twenty years' experience in the IT industry, employed by end-users and software houses, working in a number of roles, from analyst/programmer to project manager and account manager, in the manufacturing, retail, and Independent Software Vendor (ISV) sectors. Mark joined Butler Group in August 2000, and is its Enterprise Architectures Practice Director. He has worked on a number of strategic and architectural themes over the last few years, including Enterprise Architecture, IT Value Management, Enterprise Communications, and Mobility. Mark has been widely quoted in the press, including the Financial Times, Guardian, Computing, Computer Weekly, Internet news sites, and other trade magazines.
This should be an interesting and informative event; I'm already looking forward to it.
- Recovered link: https://horkan.com/2009/07/16/hosting-enterprise-architecture-tools-20090617
- Archived link: https://web.archive.org/web/20100713051548/http://blogs.sun.com/eclectic/entry/hosting_enterprise_architecture_tools_20090617
- Original link: http://blogs.sun.com/eclectic/entry/hosting_enterprise_architecture_tools_20090617
Cloud Computing reading recommendations from Jim Baty
Here are a few Cloud Computing reading recommendations from Jim Baty, Senior Vice President and Chief Architect for Global Sales and Services. I've had these for a couple of months now, but I thought I'd post them anyway, as they are well worth a read.
Some reading that folks are talking about:
- Clouds, from Berkeley / Patterson: http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf
- Data-intensive supercomputing: http://www.cs.cmu.edu/~bryant/pubdir/cmu-cs-07-128.pdf
- Implementation comparison of GAE & AWS: http://www.slideshare.net/mastermark/fowa-miami-09-cloud-computing-workshop-1059049
- Recovered link: https://horkan.com/2009/06/22/cloud-reading-recommendations-jim-baty
- Archived link: https://web.archive.org/web/20100531082020/http://blogs.sun.com/eclectic/entry/cloud_reading_recommendations_jim_baty
- Original link: http://blogs.sun.com/eclectic/entry/cloud_reading_recommendations_jim_baty
Itex and Sun thought leadership event ‘A Computing Revolution: Why Cloud Computing Changes Everything’ in Guernsey on Tuesday the 2nd of June, 2009
So whilst a number of my colleagues, friends and peers at Sun are off enjoying JavaOne, the no.1 Java event of the year, and CommunityOne West, I'm going to be in Guernsey, where I'm keynoting at the Itex / Sun Thought Leadership event "A Computing Revolution: Why Cloud Computing Changes Everything".
During the event we’ll be looking to cover the following topics:
- What is Cloud Computing and why does it matter?
- Will it benefit my department or business and how will it impact the IT function?
- What are the applications of Cloud Computing, today and in the future?
- What will it mean for small niche jurisdictions such as Guernsey?
- What is the role of Government in facilitating the new computing environment?
I’d just like to say ‘Many Thanks’ to everyone at Itex who helped organise this event, especially Daniel Fitton, Richard Parker and Chris Eaton, and to Paul Tarantino and Greg Roberts, of Sun UK’s Internet and Web2.0 team, who originally put me forward as guest speaker.
Details for the event are:
- Date: Tuesday the 2nd of June, 2009
- Time: 8:15am to 9:00am for Breakfast, 9:00am to 10:30am for the event
- Location: Old Government House, St Peter Port, Guernsey
- Registration: To reserve a place email events@itexoffshore.com or call 01481 710881
Itex have created a flyer for the event, which is available here (in PDF format). The event page is: http://www.itexoffshore.com/NewsAndEvents/Events/May+2009/Cloud_May09.htm
I really shouldn't be saying this, as JavaOne is big news in the IT industry, especially at Sun, and has an incredibly exciting line-up of speakers and agenda this year, but I'm looking forward to being in Guernsey more.
Frankly, I've been to the States a fair amount for work and conferences, but I've never been to Guernsey; I'm really looking forward to going and think I'll have a great time too. Expect to see slides, a write-up and photos sometime next week.
Shock! New report says IT Management don’t care about Power Efficiency
Shockingly, the latest report from Forrester Research effectively tells us exactly what we all know already: that the majority of CIOs, CTOs, and other IT leadership and operations managers are not interested in power saving.
The Register reported on this recently in the article "Study finds IT heads not interested in power saving" (available here: http://www.theregister.co.uk/2009/04/30/pc_power_saving/). It confirms what most of us in the IT industry know to be true: because power consumption comes under the remit of Facilities Management in most organisations, the IT department is not responsible for paying for the power consumed (whether by the compute, storage and network infrastructure itself, or by the cooling equipment required to support that infrastructure), and so has no reason to be concerned about the size of the company's power bills, or the effect of poor IT power efficiency on those bills.
Also, in almost all companies the Facilities Management department is much larger, and has a much larger budget, than the IT department, often as much as ten times larger. (In some organisations the IT department is part of the Facilities department; we most often encounter this model where the organisation in question sees IT purely as organisational 'infrastructure', rather than as a means to deliver competitive advantage.)
Encouraging IT management to be concerned about power efficiency remains highly problematic whilst the IT department is not accountable for the power spend, although things are getting better, albeit slowly. Day to day I see large numbers of IT departments, and their management, being set targets for power savings, yet I infrequently see any genuine penalties or incentives that ensure those targets are even remotely met. In most cases the IT department's focus is on maintaining business-critical systems, especially during processing runs, whilst still attempting to build out new functionality at the same time; how little things have really changed.
What constantly amazes me is the number of organisations planning, and determined, to build out new data centre facilities, even now during the downturn. Many of these organisations would be much more sensible to look at refreshing their existing infrastructure, reducing server footprint, gaining better energy efficiency and performance (provided the assessed risk impact is low), and possibly even reducing their data centre footprint. But that would mean shrinking people's corporate 'power bases' and personal 'empires', and so it often receives little genuine support.
Frankly, this would become an important topic if those responsible for the facilities budget were also responsible for the IT budget, but this is rarely the case. IT usually reports to Operations (which may also contain Facilities), Finance, or occasionally even the Main Board or Marketing (including Sales), and only rather infrequently to Facilities. (This becomes more complex when looking at the IT department's remit, and whether it has significant influence, or control, over the application development teams and the business analysts from the profit-generating business units.)
The most obvious answer would be for IT and Facilities to work much more closely together, or at least to be set joint targets that are 'SMART' (Specific, Measurable, Achievable, Relevant, Time-framed). Another approach I've heard becoming more popular recently is to redirect part of the Facilities budget to the IT department to fund technology refresh programmes; in one recent example an unprecedented 10% of the Facilities budget was transferred to IT, nearly doubling that IT department's budget for the year (the arithmetic is sketched below).
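To make that arithmetic concrete, here is a minimal sketch in Python, using purely illustrative figures; the only assumption taken from the discussion above is the roughly ten-to-one ratio between Facilities and IT budgets.

# Illustrative sketch of the budget-transfer arithmetic described above.
# All figures are hypothetical; the one assumption carried over from the
# article is that Facilities budgets are often around ten times IT budgets.

it_budget = 5_000_000               # hypothetical annual IT budget (GBP)
facilities_budget = 10 * it_budget  # ~10x the IT budget, per the ratio above

transfer_rate = 0.10                # the 'unprecedented 10%' transfer
transfer = transfer_rate * facilities_budget

new_it_budget = it_budget + transfer

print(f"Transfer from Facilities: GBP {transfer:,.0f}")
print(f"New IT budget:            GBP {new_it_budget:,.0f}")
print(f"IT budget growth:         {new_it_budget / it_budget:.1f}x")

# With a 10x budget ratio, a 10% transfer equals the whole of the original
# IT budget, which is why such a transfer roughly doubles the IT
# department's budget for the year.

With a ten-to-one ratio the 10% transfer equals the entire original IT budget, which is exactly why such a transfer roughly doubles the IT department's budget; smaller ratios would, of course, yield proportionally smaller uplifts.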
Personally, I don't think this will be addressed well in the short term, but I'm hopeful that using budget earmarked for Facilities to fund technology refresh, and planning facilities reductions, will become a more widely recognised and sensible approach to driving down the energy consumed by the technology in use within enterprises, because, frankly, something needs to be done to reduce enterprise consumption of power and space resources.
Links for this article: