Category Archives: tech

Secure Your Site: A Comprehensive Guide to WordPress Backup and Restoration

Backing up and restoring a WordPress website is a critical task for website administrators, ensuring that website data is not lost due to unforeseen circumstances such as server crashes, hacking, or accidental deletions. This article will guide you through the processes involved in backing up and restoring your WordPress website, provide an overview of popular backup and restore plugins, help you choose the appropriate backup and restore approach, and hopefully help you recover your site quickly and efficiently when needed.

Continue reading
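By way of a taster for the full guide, here is a minimal sketch of one common manual approach, assuming shell access to the server and the standard mysqldump tool; the paths and credentials shown are placeholders, so adjust everything for your own installation:

```python
import datetime
import pathlib
import subprocess
import tarfile

# Hypothetical paths and credentials; replace them with your own
# (the real database details live in wp-config.php).
WP_ROOT = pathlib.Path("/var/www/html")
DB_NAME, DB_USER = "wordpress", "wp_user"

stamp = datetime.date.today().isoformat()
backup_dir = pathlib.Path("/var/backups/wordpress")
backup_dir.mkdir(parents=True, exist_ok=True)

# 1. Dump the database (mysqldump prompts for the password because of -p).
with open(backup_dir / f"{DB_NAME}-{stamp}.sql", "wb") as fh:
    subprocess.run(["mysqldump", "-u", DB_USER, "-p", DB_NAME],
                   stdout=fh, check=True)

# 2. Archive the site files: themes, plugins, uploads and wp-config.php.
with tarfile.open(backup_dir / f"wp-files-{stamp}.tar.gz", "w:gz") as tar:
    tar.add(str(WP_ROOT), arcname=f"wp-files-{stamp}")
```

Restoration is essentially the reverse: load the SQL dump back in with mysql and unpack the archive into the web root; the plugins covered in the full article automate both halves of this and add scheduling.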

The Test of Commitment and The Nuances of Estimation: Lessons from the IT Trenches

Across three decades of diving deep into the intricacies of IT, from software engineering to enterprise architecture, there’s a multitude of lessons I’ve unearthed. However, two recent experiences bring to light some invaluable insights about software delivery and the broader strokes of business ethos.

Consider a scenario where your project hinges on the delivery timeline of an external firm. A firm that’s been given two years to code a mere HTTP PUT API request, with the deadline stretching from mid-September to the end of September. The stakes? The very funding of your project. Enter the vendor’s sales representative: brimming with confidence, he assures an on-time delivery. Yet, when playfully challenged with a wager to affirm this confidence, he declines. A simple bet revealing the chasm between rhetoric and conviction.

Such instances resonate with an enduring truth in software delivery and business: actions always echo louder than words. The real “test” in business is not just about meeting estimates or deadlines, but about the conviction, commitment, and authenticity behind the promises.

However, alongside this test of commitment, lies another challenge I’ve grappled with regardless of the cap I wore: estimating software delivery. Despite my extensive track record, I’ve faced moments when estimates missed the mark. And I’m not alone in this.

Early in my career, Bill Vass, another IT leader, imparted a nugget of wisdom that remains etched in my memory. He quipped, “When it comes to developer estimates, always times them by four.” This wasn’t mere cynicism, but a recognition of the myriad unpredictabilities inherent in software development, reminiscent of the broader unpredictabilities in business.

Yet, the essence isn’t about perfecting estimates. It revolves around three pillars: honesty in setting and communicating expectations; realism in distinguishing optimism from capability; and engagement to ensure ongoing dialogue through the project’s ups and downs.

In the grand tapestry of IT and business, it’s not always the flawless execution of an estimate or a delivered promise that counts. At the end of the day, an estimate is just that: an estimate. The crux lies in how we navigate the journey, armed with authenticity, grounded expectations, and unwavering engagement. These cornerstones, combined with real-world lessons, are what construct the foundation of trust, catalyse collaborations, and steer us toward true success.

Comparing Technical Proving, MVP, and Spike in Enterprise Architecture

Introduction

As enterprise architects navigate the complex landscape of delivering value and mitigating risks, different approaches come into play. Two prominent methods, Technical Proving and Minimum Viable Product (MVP), offer unique benefits in enterprise architecture. Additionally, the concept of a “spike” provides a focused investigation to address specific uncertainties. In this article, we will compare Technical Proving and MVP while also discussing the characteristics and purpose of a spike, offering insights into their respective roles in enterprise architecture.

Technical Proving

Validating Technical Concepts

Technical Proving involves building small-scale prototypes or proofs of concept to validate the feasibility and viability of technical concepts. Its primary objective is to evaluate technical aspects such as architecture, frameworks, performance, scalability, and integration capabilities. By identifying potential risks early on, architects can make informed decisions and mitigate any issues that may arise during implementation.

Benefits of Technical Proving

  1. Risk Mitigation: Technical Proving minimizes risks by validating technical concepts before full-scale implementation. It helps identify potential roadblocks or challenges, enabling proactive mitigation.
  2. Informed Decision-Making: By rapidly prototyping technical elements, architects gain valuable insights into the feasibility of various solutions. This knowledge empowers them to make informed decisions and streamline the development process.
  3. Resource Optimization: Technical Proving ensures efficient resource allocation by focusing on high-potential solutions and discarding unfeasible options. It prevents unnecessary investments in non-viable concepts.

Minimum Viable Product (MVP)

Delivering Value and Gathering Feedback

MVP is an approach that involves developing a functional product with minimal features and capabilities to address a specific problem or deliver immediate value to users. The primary goal of an MVP is to obtain feedback from early adopters and stakeholders, enabling architects to iteratively refine and enhance the product based on real-world usage and user input.

Benefits of MVP

  1. Early Validation: By releasing a minimal version of the product, architects can validate their assumptions and gather valuable feedback. This enables quick iterations and improvements, enhancing the chances of success in the market.
  2. Cost Efficiency: MVPs focus on delivering essential functionality, reducing development costs and time-to-market. By avoiding extensive upfront investment in unnecessary features, resources can be allocated more effectively.
  3. User-Centric Approach: MVPs prioritize user feedback and involvement, ensuring that the final product aligns closely with user needs. This customer-centric approach improves user satisfaction and increases the chances of successful adoption.

The Role of a Spike

In addition to Technical Proving and MVP, another approach called a spike plays a distinct role in enterprise architecture. A spike is an exploratory investigation that focuses on addressing specific uncertainties or concerns, usually in a time-bound and limited-scope manner. Unlike Technical Proving and MVP, a spike is not intended for broad validation or market testing but rather for gathering targeted knowledge or data.

Characteristics of a Spike

  1. Targeted Investigation: Spikes focus on exploring a specific area of concern or uncertainty, providing deeper insights into a particular problem or technology.
  2. Time-Bound: Spikes have a fixed timeframe allocated for the investigation, ensuring focused and efficient efforts.
  3. Learning and Discovery: The primary goal of a spike is to gather knowledge and insights that can guide decision-making and inform subsequent development efforts.

Differentiating Spike from Technical Proving and MVP

While Technical Proving and MVP serve broader purposes, spikes are narrow and point-specific investigations. Technical Proving validates technical concepts, MVP delivers value and gathers feedback, while spikes focus on targeted exploration to address uncertainties.

Conclusion

In the realm of enterprise architecture, Technical Proving and MVP offer valuable approaches for validating concepts and delivering value. Technical Proving mitigates technical risks, while MVP emphasizes user value and feedback. Additionally, spikes provide focused investigations to address specific uncertainties. Understanding the characteristics and appropriate use cases of these approaches empowers architects to make informed decisions, optimize resource allocation, and increase the chances of successful outcomes in enterprise architecture endeavours.

John Zachman, father of Enterprise Architecture, to present at the next BCS Enterprise Architecture Speciality Group event on Tuesday the 6th of October, 2009

Wow! The BCS Enterprise Architecture Speciality Group has secured John Zachman, the de facto father of Enterprise Architecture, and creator of the “Zachman Framework for Enterprise Architecture” (ZFEA), to speak at its next event on Tuesday the 6th of October. Talk about a major coup. The BCS EA SG is really getting busy and is the fastest growing BCS Speciality Group I’ve seen so far, with 750+ members, and is gaining new members on a daily basis.

Come along and see John speak about “Enterprise Design Objectives – Complexity and Change”, at the Crowne Plaza Hotel, 100 Cromwell Road, London SW7 4ER on Tuesday the 6th of October, 2009. You can register your place here: http://www.ea.bcs.org/eventbooking/showevent.php?eventid=esg0908

Of course, a serious advantage of the BCS EA SG is that it is framework agnostic, and as such can look at best practices and framework capabilities from across the EA community. In fact, less than six months ago a preceding event was an update on the recently released TOGAF 9 standard from the Open Group (typically seen as one of the other major frameworks, alongside ZFEA, although you often encounter organisations using a blended, best-of-breed approach when it comes to EA implementation).

The BCS EA SG has got some other great events lined up, and I’m especially looking forward to hearing “Links with other IT disciplines such as ITIL and strategy” on Tuesday the 15th of December, 2009, over at the BCS London headquarters at 5 Southampton Street, London WC2E 7HA. Details of this event are still being confirmed, but it’ll be great to see how thoughts on mapping major capabilities to EA match with my own (I’ve been doing rather a lot in terms of co-ordinating EA, Service Management and Portfolio Management lately). Plus, since TOGAF 9 removed the genuinely useful appendices showing mappings between TOGAF, ZFEA, and other disciplines and frameworks, promising to have them published as standalone white papers, it’s great to know that experience and knowledge in this important area has not been forgotten and is in fact being collated and compiled by the BCS EA SG team.

I’m really looking forward to seeing John speak on the 6th, and if you can make it I hope to see you there too! And please do come over and say “Hi” if you get the chance.

The problem with automated provisioning (III of III)

This is the third of my articles on the macro-level issues with (automated) provisioning. It builds on the previous articles, specifically the comparison of Enterprise versus “web scale” deployments described in “The problem with automated provisioning (II of III)”, and the levels of complexity, in terms of automated provisioning, set-up and configuration, that each requires.

As I’ve said before in this series of articles, provisioning a thousand machines which all have the same OS, stack and code base, with updated configuration information, is easier to set up than provisioning a thousand machines which use a mixture of four or five Operating Systems, all with differing patch schedules, patch methods and code release schedules, with a diverse infrastructure and application software stack and multiple code bases. To express this I’ve postulated the equation “(Automated) Provisioning Complexity = No. of Instances x Freq. of Change”.

What I’d like to shift the focus to now is runtime stability and the ability of a given system to support increasingly greater levels of complexity.

I find that it is important to recognise the place of observation and direct experience as well as theory and supposition (in research I find it’s useful to identify patterns and then try to understand them).

Another trend that I have witnessed with regard to system complexity, including the requirement to provision a given system, is that the simpler and more succinct a given architectural layer, the more robust that layer is and the more able it is to support layers above it which have higher levels of complexity.

Often architectural layers are constrained in terms of their ability to support (and absorb) high numbers of differing components and high rates of change by the preceding layer in the stack. In other words, the simpler the lowest levels of the stack, the more stable they will be, and thus the more able to support diverse ecosystems with reasonable rates of change in the layers above them.

The more complex the layer below, the less stable it is likely to be (given that the number of components, the instances thereof, and the rate of update significantly drive up the level of complexity of the system).

This phenomenon is found in the differing compute environments I’ve been speaking about in these short articles, and again it affects the ability of a given system to be provisioned in any succinct and efficient manner.

More accurate Enterprise

Typically Enterprise IT ecosystems are woefully complex, due to a mixture of longevity (sweating those assets, and risk aversion), large numbers of functional systems (functional as in functional requirements), and non-functional components (i.e. heterogeneous infrastructure, with lots of exceptions, one-off instances, etc.).

Subsequently they suffer from the issue that I’ve identified above: because the lower levels are already complex, they are constrained in the amount of complexity that can be supported at the level above. The accompanying diagram demonstrates the point.

[Diagram: the-problem-with-provisioning-0.1-real-enterprise]

More accurate Web Scale

Web Scale class systems, on the other hand, often exhibit almost the opposite behaviour. Given that they often use a radically simplified infrastructure architecture (i.e. lots of similar, easily replaceable, common and often ‘commodity’ components) in a ‘platform’ approach, there aren’t the high levels of heterogeneity that you see in a typical Enterprise IT ecosystem; the estate is largely homogeneous. This approach is often found in the application and logical layers above the infrastructure too, i.e. high levels of commonality in the software environment, used as an application platform to support a variety of functionality, services, code and code bases.

Subsequently, because of the simple nature of the low-level layers of the architecture, they are much more robust and capable of withstanding change (introducing change into a complex ecosystem often leads to something, somewhere, breaking, even with exceptional planning). This stability and robustness ensures that the overall architecture is better equipped to cope with change, and with the frequency of change, and that layers with high levels of complexity can be supported above.

[Diagram: the-problem-with-provisioning-0.1-real-web]

And so that concludes my articles on provisioning, and the problems with it, for the time being, although I might edit them a little, or at least revisit them, when I have more time.

The problem with automated provisioning (II of III)

This is the second of my articles on the macro-level issues with (automated) provisioning. It focuses again on the theme of complexity as a product of “No. of Instances” x “Freq. of Change”, described in the previous article “The problem with automated provisioning (I of III)“, but this time compares an Enterprise Data Centre build-out with a typical “Web Scale” Data Centre build-out.

Having built out both of the examples demonstrated, I find the comparison below useful when describing some of the issues around automated provisioning, and occasionally why there are misconceptions about it among those who typically deliver systems from one of these ‘camps’ and not the other.

Enterprise

[Diagram: the-problem-with-provisioning-0.1-enterprise]

Basically the number of systems within a typical Enterprise Data Centre (and within that Enterprise itself) is larger than that in a Web Scale (or HPC) Data Centre (or supported by that organisation), and the number of differing components that support those systems is higher too. For instance, at the last Data Centre build-out I led there were around eight different Operating Systems being implemented alone. This base level of complexity, which is then exacerbated by the frequency with which it all has to be patched and updated (as demonstrated by the “Automated Provisioning Complexity = No. of Instances x Freq. of Change” equation), significantly impacts any adoption of automated provisioning (it makes defining operational procedures more complex too).

Web Scale

[Diagram: the-problem-with-provisioning-0.1-web]

Frankly, a Web Scale build-out is much more likely to use a greater level of standardisation in order to drive the level of scale and scaling required to service the user requests and to maintain the system as a whole (here’s a quote from Jeff Dean, Google Fellow: “If you’re running 10,000 machines, something is going to die every day.”). This is not to say that there is not a high level of complexity inherent in these types of system; it’s just that, in order to cope with the engineering effort required to ensure that the system can scale to service many hundreds of millions of requests, it may well require a level of component standardisation well beyond what you’d typically see in an Enterprise-type deployment (where functionality and maintenance of business process is paramount). Any complexity is more likely to be in the architecture needed to cope with said scaling, for instance distributed computational proximity algorithms (i.e. which server is nearest to me physically, so as to reduce latency, versus which servers are under most load, so as to process the request as optimally as possible), or in the distributed configuration needed to maintain said system as components become available and are also de-commissioned (for whatever reason).
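To make that parenthetical aside a little more concrete, below is a deliberately naive sketch of the proximity-versus-load trade-off such routing has to make. The server names, figures and weighting are entirely hypothetical, and this is not how any particular site actually does it:

```python
# Purely illustrative: choose a server by trading off network proximity
# against current load. All data and the weighting are hypothetical.
servers = [
    # (name, round-trip latency in ms, current load as a 0-1 fraction)
    ("dc-east-01", 12.0, 0.85),
    ("dc-east-02", 15.0, 0.40),
    ("dc-west-01", 70.0, 0.10),
]

def score(latency_ms: float, load: float, load_weight: float = 50.0) -> float:
    """Lower is better: latency plus a penalty proportional to load."""
    return latency_ms + load_weight * load

best = min(servers, key=lambda s: score(s[1], s[2]))
print("route request to:", best[0])  # picks dc-east-02 with these numbers
```

Real systems of course use far richer signals and distributed state, but the underlying tension between ‘closest’ and ‘least loaded’ is the same.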

Automated Provisioning Complexity = No. of Instances x Freq. of Change

At the most basic level, provisioning a thousand machines which all have the same OS, stack and code base, with updated configuration, is easier to set up than provisioning a thousand machines which use a mixture of four or five Operating Systems, all with differing patch schedules and patch methods, a diverse infrastructure and application software stack, and multiple code bases. I suspect that upon reading this article you may think that this is an overly obvious statement to make, but it is the fundamentals that I keep seeing people trip up on over and over again, which infuriates me no end, and so, yes, expect another upcoming article on the “top” architectural issues that I encounter too.
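As a purely illustrative worked example of that equation: a thousand web scale machines running one OS and one stack, with, say, a dozen co-ordinated releases a year, gives a complexity figure in the region of 1 x 12 per layer; a thousand Enterprise machines spread across five Operating Systems, each on its own quarterly patch cycle, plus a handful of distinct application stacks and code bases with their own release schedules, multiplies out to a figure an order of magnitude or more higher, before you even count the exceptions and one-off instances.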

HPC, or High Performance Computing, the third major branch of computing, usually follows the model above for “web scale” build-outs. I have an upcoming article comparing the three major branches of computing usage, Enterprise, Web Scale, and HPC, in much greater detail; however, for the time being the comparison above is adequate to demonstrate the point I am drawing to your attention: that complexity of environment exacerbates the implementation of an automated provisioning system. I hope you enjoyed this article; it is soon to be followed by a reappraisal and revised look at Enterprise and Web Scale provisioning.

The problem with automated provisioning (I of III)

Referring back to my previous article “The problem with automated provisioning – an introduction”, once you get past these all-too-human issues and into the ‘technical’ problem of provisioning, then I’d have been much nearer the mark in my initial assessment, because it is indeed an issue of complexity. The risks, costs, and likely success of setting up and maintaining an automated provisioning capability are integrally linked to the complexity of the environment to be provisioned.

There are a number of contributing factors, including the number of devices, virtual instances, etc., and their location and distribution relative to the command and control point, but the two main ones in my mind are “Number of Instances” and “Frequency of Change”.

And so ‘Complexity’, in terms of automated provisioning, at a macro level, can be calculated as “Number of Instances” multiplied by “Frequency of Change”.

Automated Provisioning Complexity = No. of Instances x Freq. of Change

By “Number of Instances” I mean the number of differing operating systems in use, the number of differing infrastructure applications, the number of differing application runtime environments and application frameworks, the number of differing code bases, the number of content versions being hosted, etc.

By “Frequency of Change” I am drawing attention to patches, code fixes, version iterations, code releases, etc., and how often they are delivered.

The following diagram demonstrates what I frequently call ‘The Problem with Provisioning’; as you can see I’ve delineated against three major architectural “levels”: from the lowest, nearest to the hardware, the OS layer, which also contains ‘infrastructure software’; the Application layer, containing the application platform and runtime environment; and the “CCC” layer, containing Code, Configuration and Content.

[Diagram: the-problem-with-provisioning-0.1-overview]

In a major data-centre build-out it is not atypical to see three, four or even more different operating systems being deployed, each of which is likely to require three- or six-monthly patches, as well as interim high-value patches (bug fixes that affect the functionality of the system, and security patches). Furthermore, it’s likely the number of ISV applications, COTS products, and application runtime environments will be much higher than the number of OS instances, and that the number of “CCC” instances will be even higher.
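As a minimal sketch of how the equation plays out across those three layers, here is the model expressed in a few lines of Python; the instance counts and change frequencies are purely illustrative and not measurements from any real estate:

```python
# Illustrative only: rough per-layer complexity using
# "Provisioning Complexity = No. of Instances x Freq. of Change".
layers = {
    # layer name: (no. of distinct instances, changes per year)
    "OS / infrastructure software": (4, 4),          # e.g. 4 OSes, quarterly patches
    "Application platform / runtime": (12, 6),       # ISV apps, COTS, runtimes
    "Code, Configuration, Content (CCC)": (30, 26),  # roughly fortnightly releases
}

for layer, (instances, changes_per_year) in layers.items():
    complexity = instances * changes_per_year
    print(f"{layer}: {complexity}")
```

Even with these modest, made-up numbers the “CCC” layer dwarfs the OS layer, which is one reason why, as noted below, the three groupings typically need differing provisioning approaches and technologies.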

I find it important to separate the system being provisioned into these three groupings because they typically require differing approaches (and technologies) for the provisioning thereof, something I mentioned in the previous article where organisations mistakenly believe that the provisioning technology that they have procured will scale the entire stack, from just above ‘bare metal’ to “CCC” changes (I’ve seen this issue more than once, even from a Sun team who should have known better, albeit it was around three years ago).

This model brings to the fore the increasing level of complexity, both of components at each layer, and the frequency of changes that then occur, and although the model above is a trifle simplistic, it is useful when describing the issues that one can encounter with implementing automated provisioning systems, especially to those with little knowledge or awareness of the topic.

The problem with automated provisioning – an introduction

I was going to start this short series of articles with the statement that the problem with provisioning is one of complexity, and I’d have been wrong: the predominant issues with provisioning, and specifically automated provisioning, are awareness and expectation.

Awareness and Expectations

The level of awareness of what can actually be done with automated provisioning, and often, more importantly, what cannot be done, or even what automated provisioning actually “is”, is a significant barrier. It is followed by the expectations set, both by end users hoping for IT “silver bullets”, who may well have been oversold to, and by Systems Integrators, product vendors and ISVs who sadly promise a little too much to be true, or are a trifle unaware of the full extent of their own abilities (positivity and confidence aside).

For instance, I was once asked to take over and ‘rescue’ the build-out of a data centre on behalf of a customer and their outsourcer (£30M+ to build out, estimated £180M total to build and run for the first five years).

Personally I would say that this data-centre build-out was of medium complexity, being made up of more than five hundred Wintel servers, circa three hundred UNIX devices, and around two hundred ancillary pieces of hardware, including network components (firewalls, switches, bridges, intelligent KVMs and their ilk) and storage components such as SAN fabric, high-end disk systems (such as the Hitachi 9900 range), high-end tape storage, and other components.

One of the biggest problems in this instance was that the contract between client and vendor stipulated using automated provisioning technologies. Not a problem in itself; however, an assumption had been made, by both parties, that the entire build-out would be done via the provisioning system, without a great deal of thought following this through to its logical conclusion.

Best to say here that they weren’t using Sun’s provisioning technology, but the then ‘market leader’; however, the issues were not to do with the technology, nor the functionality and capabilities of the provisioning product. It is likely that similar problems would have been encountered even if they had been using Sun’s.

This particular vendor had never implemented automated provisioning technologies on a brand new “green-field” site before; they had always implemented them in existing “brown-field” sites, where, of course, there was an existing and working implementation to encapsulate in the provisioning technology.

As some of the systems were being re-hosted from other data-centres (in part, savings were to be made as part of a wider data-centre consolidation), another assumption had been made that this was not a fresh “green-field” implementation but a legacy “brown-field” one. However, this was a completely new data-centre, moving to upgraded hardware and infrastructure, never mind later revisions of application runtime environments, new code releases, and in-part enhanced, along with wholly-new, functionality too. In other words, this was not what we typically call a “lift and shift”, where a system is ‘simply’ relocated from one location to another (and even then ‘simply’ is contextual). Another major misconception, and example of incorrectly set expectations, was that the provisioning technology in question would scale the entire stack, from just above ‘bare metal’ to ‘Code, Configuration and Content’ (CCC) changes, something that was, and still is, extremely unlikely.

Sadly, because of these misconceptions and the lack of forethought, predominantly on behalf of the outsourcer, no one had allowed for the effort either to build out the data-centre in its entirety and then encapsulate it within the provisioning technology (a model they had experience of, and which was finally adopted), or to build the entire data-centre as ‘system images’ within the provisioning technologies and then use those to implement the entire data-centre (which would have taken a great deal longer, not least because testing a system held only as system images would have been impossible: they would have had to be loaded onto the hardware to do any testing, whether testing of the provisioning system itself, or real-world UAT, system, non-functional, and performance testing).

Unsurprisingly, one of the first things I had to do when I arrived was raise awareness that this was an issue, as it had not been fully identified, before getting agreement from all parties on a way forward. Effort, cost, resources, and people were all required to develop the provisioning and automated provisioning system into a workable solution. As you can guess, there had been no budget put aside for any of this, so the outsourcer ended up absorbing the costs directly, leading to increased resentment of the contract that they had entered into and straining the relationship with the client. However, this had been their own fault, born of a lack of experience and naivety when it came to building out new data-centres (this had been their first, so they did a lot of on-the-job learning and gained a tremendous amount of experience, even if much of it was in how not to build out a data centre).

This is why I stand by the statement that the major issues facing those adopting automated provisioning are awareness of the technology and what it can do, and expectations of the level of transformation and business enablement it will facilitate, as well as of how easy it is to do. The other articles in this series will focus a little more on the technical aspects of the “problem with provisioning”.

Jakob Nielsen: “Mobile User Experience is Miserable”

The latest research into mobile web user experience says that overall the experience is “miserable”, and cites the major issues with mobile web usage, as well as looking at overall “success” rates which, although improved from the results of research in the mid-1990s, are much lower than typical PC and workstation results.

It is well worth a read for those looking at optimising for a mobile readership and audience, and the full report is available here: http://www.useit.com/alertbox/mobile-usability.html

This new report names two major factors in improving the aforementioned success rates: sites designed specifically with mobile use in mind, and improvements and innovations in phone design (smartphones and touch screens perform best).

Jakob Nielsen, ex-Sun staff member and Distinguished Engineer, is famous for his work in the field of “User Experience”, and his site is a key resource for advice and best practice in terms of web, and other types of, user experience design.

Bill Vass’ top reasons to use Open Source software

You might not have seen Bill Vass’ blog article series on the topic of the top reasons to use and adopt Open Source software; and as it’s such an insightful series of articles I thought I’d bring it to your attention here.

Each one is highly data-driven and contains insight that you probably haven’t seen before, but which is useful to be aware of when positioning Open Source to a CTO, a CIO or an IT Director, because of Bill’s viewpoint (having come from a CIO and CTO background). Often when you see this sort of thing written it can be rather subjective, almost ‘faith based’, so I’m always on the lookout for good factual information that is contextually relevant.

Bill Vass’ top reasons to use and adopt open source:

  1. Improved security
  2. Reduced procurement times
  3. Avoid vendor lock-in
  4. Reduced costs
  5. Better quality
  6. Enhanced functionality

And before you mention it, I know Bill already summarised these articles in his lead-in piece “The Open Source Light at the End of the Proprietary Tunnel…“, but it was such a great set of articles it seems a shame not to highlight them to you!