Filtering the signals from the noise to understand digital

“Digital” is generating a lot of distracting chatter that needs to be separated from the core concepts that make up digital. To filter the signals from the noise, let’s focus on the four stages of digital technology creation: design, develop, deliver and operate.


Before making the first jump and heading into a long, drawn-out Design phase, note that Design and Develop need to be closely coupled. In the very early phases you could hold off on design and simply develop something based on a rough ideation or user story. But be careful: design is critical and needs to happen before you step too far down the life cycle.

The design process for a system in a digital world must:

  1. Start quickly with the big picture, broken down into smaller chunks to guide developers.
  2. Be strongly influenced by end-user thinking (empathizing) using Design Thinking principles.
  3. Incorporate feedback from the Develop and Deliver phases, addressing issues and feature requests continuously.
  4. Be highly collaborative, engaging developers and engineers early to reduce risk, build trust and leverage knowledge.
  5. Apply good practices to ensure the design can adapt quickly with minimal impact to the business. The goal is “design for operations” using automation and intelligence at scale for frictionless adoption.


Considerations for moving to a DevOps model include:

  1. Decide on which DevOps model (there are many). Consider your business model and operating structure.
  2. Don’t be too prescriptive. The value of development in a digital world is allowing developers to have autonomy, using the guardrails and requirements from the Design phase. Remember to feed findings back into the Design phase.
  3. Do not confuse the multiple stages of development (alpha, beta, prototype, pilot). A simple rule is that the early stages are unconstrained, allowing developers to show the art of the possible. As you move towards later stages, the reality of operating in the business (compliance, regulations, integration) means you may have more constrained (structured) thinking. The Design phase should also consider this.
  4. Explore new ways of accelerating development. Open source, crowdsourcing, hackathons and buildathons are new ways to develop products. Some lend themselves to the unconstrained space (crowdsourcing, hackathons), while others are better suited to more structured development (open source, buildathons). Other considerations are coding styles; for example, Extreme Programming is a popular method.


In the digital era, deliver is a continuous motion in which delivery organizations must:

  1. Engage proactively in the development process rather than waiting for the output. Deliver needs to be part of the development team, where knowledge is shared and Deliver teams are aware of the next release (i.e., “no surprises”).
  2. Deploy in small increments. This reduces risk, improves support and makes system changes a normal, daily task.
  3. Build trust with the business and customers. That is what happens when you deploy small incremental improvements. You upgrade your smartphone apps without fear; this is the same feeling we need in the enterprise.
  4. Remove governance bottlenecks and traditional change control processes as trust builds. Test some rollouts, and once you are confident, replace processes with automated tests that the development team must validate against.
  5. Give feedback to development and design on future improvements and issue resolution.
  6. Structure teams and operations for high frequency by automating as much as possible to move fast and reduce the risk of failures.
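The idea of replacing manual change approval with automated checks (point 4 above) can be sketched as a simple release gate. This is a minimal illustration, not any particular CI tool; the check names and thresholds are hypothetical:

```python
# Hypothetical release gate: a deploy proceeds only when every
# automated check passes, replacing a manual change-approval step.

def release_gate(checks):
    """Run each (name, check_fn) pair; return (approved, failures)."""
    failures = [name for name, check in checks if not check()]
    return (len(failures) == 0, failures)

# Example checks for one small incremental release (all hypothetical).
checks = [
    ("unit_tests_pass", lambda: True),
    ("error_rate_below_1pct", lambda: 0.4 < 1.0),
    ("rollback_script_present", lambda: True),
]

approved, failures = release_gate(checks)
print("deploy" if approved else f"blocked: {failures}")
```

The point is that the gate is owned by the development team and runs on every small release, so trust is earned by evidence rather than by committee.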


The final stage, operating the new service, requires the system to be highly automated and almost entirely self-service. Teams must:

  1. Capture log data from all systems and applications. This data, although specific to an individual device, app or service, when combined with other logs can provide amazing insight into the operation of the end-to-end solution.
  2. Automate good processes; fix bad ones before automating them.
  3. Integrate operations and security to resolve problems faster. For example, a denial of service attack might start with a server or firewall outage. How much time is wasted sending this to the wrong teams to fix?
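The value of combining logs (point 1) and of routing incidents to the right team (point 3) can be shown with a toy correlation: entries from separate systems are merged on time, so a firewall alert and a web-server symptom that look unrelated in isolation surface as one incident. The log entries and the five-minute window here are invented for illustration:

```python
from datetime import datetime, timedelta

# Toy log entries from two separate systems (timestamps invented).
firewall_log = [(datetime(2020, 1, 1, 9, 0), "firewall", "connection flood")]
web_log      = [(datetime(2020, 1, 1, 9, 2), "webserver", "latency spike")]

def correlate(*logs, window=timedelta(minutes=5)):
    """Merge logs by time and pair events from different systems
    that occur within `window` of each other."""
    merged = sorted(entry for log in logs for entry in log)
    pairs = []
    for a, b in zip(merged, merged[1:]):
        if b[0] - a[0] <= window and a[1] != b[1]:
            pairs.append((a, b))  # same incident seen by two systems
    return pairs

for a, b in correlate(firewall_log, web_log):
    print(f"possible single incident: {a[2]} + {b[2]}")
```

Seen together, these two entries suggest one denial-of-service event rather than two unrelated tickets sent to two different teams.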

Linked life cycle

To determine if a service or product is truly digital, ask if it is designed, developed, delivered and operated using the new techniques. If it is not, eventually it will fall short of expectations.

The most critical aspect of the four stages is that they are linked. There are no gaps between the stages, no fences to throw things over, no long and drawn out governance models. Feedback loops ensure that issues are dealt with and improvements are made.

The optimal example of digital is when a product can flow from left to right (design –> develop –> deploy –> operate) as a single value chain, with information, teams and processes all interlocking and supporting the stages. Feedback is provided from the earlier stages to continuously improve the product.

Consider Google or Amazon Web Services, who bring new services to market at mind-boggling velocity. This is possible not just because they have a common platform, but because they have an operating model that supports this left-to-right motion that embraces the best practices, most of which they are writing or contributing towards.

Although the people aspects are just as important as the technology when shifting to a digital world, you should now be able to filter out the noise on the technology side, focus on becoming a digital business, and evaluate services and products to see whether they are worthy of carrying the “Digital” brand.

The secret to digital transformation? Start by looking in the mirror

Digital is not just a technical thing. In fact, I would argue that technology can sometimes distract from true digital transformation.

So, what is digital?  It is the ability of a business to respond to or even predict change in a way that causes minimum disruption to the business, based on information from many sources (system and human, outside and in), and presented in a way that develops trust and understanding.

Digital transformation impacts three dimensions of a business:

  • People – Digital transformation impacts every level, including employees, partners and clients. If you don’t believe this, you might be part of the problem.
  • Process – Business processes and the operating model are impacted by the new digital processes. How the company operates, makes decisions and structures itself is key to success.
  • Technology – This is not the leading dimension but more a follower. Having platforms that support the speed of the organization, whilst seamlessly providing information to other systems and services, is critical.

A bit like the fire triangle, you need all three dimensions functioning to transform an organization into a truly digital one.

Organizational culture is king

Let’s focus on the big one, though: people. Suddenly organizational culture has become the big word on the street. Years ago, I raised the word culture as a mechanism for change, pointing out the maxim “culture eats strategy for breakfast.” One leader basically said they didn’t believe in culture impacting the business; you do what they say, as in “my way or the highway!” Not sure that this was a digital highway.

So, what do we need to do to start digital transformation from a people aspect? Firstly, look in the mirror.

Change for people occurs at every level of the organization. It requires people to understand what they are going to do differently. How will they behave, work with others, develop, and grow themselves and others in the new digital business?

Some of the human factors that need to be incorporated into the culture of a new digital organization are:

Trust, Humility, Collaboration, Teams, Self-Learning, Transparency, Risk, Listening

Digital companies of the future will manifest these soft skills in the operating model or fabric of the business. Heroes will be few and far between.

There is no quick fix to culture; this is a long game. We can migrate System A to System B, and we can shift our process from Waterfall to Agile; these things only require money, skills and technology to implement. But to be truly digital is to move people to thinking digital in every action they take, and this is hard. It impacts beliefs, behaviours, historic cultures and the habits that currently impede change.

Digital people

A good starting point is to surround yourself with digital people who inspire you to think and operate differently, exhibiting the skills above and impacting decision making in the business. Don’t convert them; let them convert you. Don’t just wake up on Monday saying you are digital; you need to go through your own transformation.

Test areas of changes within your organization. Create groups of people and programs of work that are empowered and enabled to work differently, driving towards a common strategy.

At some point, when you have enough evidence, you will need to take the findings of the test programs and roll the changes into the operating model of the business. This should include human resources changes, performance measurement methods, new ways of recruiting and improving key skills, strong teaming structures and rewards for business outcomes.

Digital transformation questions you should be asking

A successful digital transformation program requires the biggest change in people, from the top down, before changing technology and processes. Ensure you understand the culture and characteristics of the people needed in your organization to support the transformation and ongoing execution.

Below are questions you should be asking to assess how digital you are. Consider what each question is asking and why. I hope these questions help you think about the changes you need to make for a successful digital transformation.

The Digital Person Questionnaire

1: Do you listen to others before you make your decision?


2: Do you obtain information from many others inside and outside your ecosystem?


3: Do you trust the people executing your tasks? Are you macro- or micro-managing?


4: Do you or your team collaborate on content and decisions?


5: Do you learn and develop new skills?


6: Do you help others learn and develop new skills?


7: Do you accept failure from others, and help them understand what went wrong?


8: Do you work in isolation or with a team?


9: Do you create a personal relationship or connection with your staff, team and colleagues?


10: Do you consider yourself to be humble — do you demonstrate humility?


11: Do you lead through influence or is it more “my way or the highway”?


12: Do you embrace change?


13: Do you try to fix the big problem in one go, or break it down in stages?


14: Do you focus on your own success or the success of the team?


15: Do you recruit or surround yourself with clever people?


Let the Kids Define our Technology Roadmap


2010 CeBIT Technology Fair

So I think we all recognise that technology is evolving at an exponential rate. Ten years ago we could see and track innovation in years; the new Nokia phone or 15K hard drives would have been anticipated for months, and you could prepare yourself well in advance. If you were a technologist in a business, you had time to warm up the CFO or budget holder, you could work across the stakeholders to get them onside, and by the time the product was launched you had the whole company champing at the bit. However, since this took a long time, other technologies had come out which either impacted the anticipated performance boost or features, or introduced an incompatibility issue, which meant you now had to make another choice.

We have a problem today in that new products and new features are being released in much shorter timeframes, so you don’t have time to work the stakeholders; and even if you did, the product or feature you are promoting would be out of date by the time you got it agreed and deployed. And it’s great to have a five-year plan, but really, who can forecast which technologies are going to be hot in five years? Sure, you can throw the biggies at it (Cloud, Big Data, Mobility), but what does that really mean?

How many times have you heard this: “We should use cloud to enable our business, analyse the data in the cloud, then publish the results to a mobile-enabled workforce”? An insightful statement, but the reality is our kids are already using cloud on their smartphones and seeing dashboards of their likes and status; of course business will adopt these things.

So let’s think about that: what does a typical teenager today expect from their IT experience?

  • Being online, all the time through multiple devices
  • Access to app stores for instant application purchase and provisioning
  • Collaborative working, with data shared easily and securely
  • Work from anywhere, with a consistent working experience
  • Multitask / application integration
  • Everything on a cloud

I would imagine that if you looked at most CIO, CTO and CEO strategies, this would be a comparable list, maybe with more business jargon thrown in just to help maintain the illusion. The reality is that the younger generations are already doing this and more today; by the way, those teenagers are going to expect these things to be in place when they enter the workplace in five, ten or fifteen years’ time.

An example of this was recently demonstrated in our home. I tried to explain to my daughter that the cloud was actually a physical building somewhere, with servers, storage and networks hosting and providing the apps and data she uses on her phone and tablet. She looked at me, said, “I know,” and pointed me to the `Wikipedia App` on her smartphone. It was at this point I realised that daddy was no longer the fountain of all knowledge.

So what do we need to do? Well, the first thing is to watch the younger generation and how they operate with IT. They are not fussed by the actual devices (OK, I might bow down to the Apple brand a bit on that one); mainly they want the user experience to be easy, flexible and in real time. If you wish to experiment with your own kids, remove a device, then remove two, and see how they adapt. If you wish to be really cruel, shut down the Wi-Fi in your house. This may have two effects: either they do not talk to you for a few hours, or they find another Wi-Fi access point outside your control and reconnect (note they still may not talk to you). In most cases the kids will continue to work and operate, maybe in a new location on a different device, but they are working.

So next time you develop a roadmap or think about the technology strategy for your business, consider spending time looking at how the next generation of employees will wish to work and operate; it may help you think about what direction your business needs to take.

Next Generation IT: Considerations and Conclusion Part 6 of 6

Other Considerations

Although we have covered the four main areas of Next Generation IT solutions, there are three other key elements to consider: Open Source, Mobility and As a Service. Let’s tackle each of these to help complete the picture.

In the past, Open Source was viewed as something that looked interesting, but when it came to mainstream production, commercial applications won the battle in terms of support and ownership. Since the commercial products came with support and maintenance contracts, we had the proverbial “throat to choke” if something went wrong. However, the open source world started to make its mark over 12 years ago with the introduction of the Linux operating system. Although it took 4-5 years to establish, we would now consider Linux one of the preferred operating systems for data centres and application hosting. If we leapfrog to today, we are seeing a range of open source products hitting the market and being considered for key production-based workloads. One of the main areas is Big Data and the introduction of Hadoop, developed out of Yahoo and matured through the Internet Service Providers; this open source product has revolutionised business analytics. With open source products like Hadoop, the risk of feeling disconnected from the developer or product owner, or of having no real support framework, is now mitigated by vendors providing third-party consulting support for your implementations. So you have the look and feel of a commercial product with the flexibility and resources of a crowdsource-developed open source product.

Mobility is a key feature today for any application. How you push data and information to your workforce is critical to productivity. At the same time, collecting data from the mobile workforce is beneficial to business operations. Enabling people to have access to systems securely and quickly means that the workforce can always be online and can operate at, or close to, 100% of their in-office productivity. As business applications enable the mobile workforce to access sales data, ERP and CRM systems, we should also consider pushing information about system operations and threat analysis so that events can be handled pro-actively versus reactively.

As a Service can be nicely aligned to cloud delivery models. The reason for raising the As a Service factor is its valuable purchase model and innovation potential. Traditional purchase models are good for businesses with large Capex budgets, but these are few and far between nowadays. And even Capex-rich businesses are still looking to spend wisely and have a more predictable commercial model. The As a Service model allows businesses to buy defined services at an agreed-upon unit rate, charged on a consumption or allocation basis, typically with minimum volume and time commitments. Once over the minimum levels, organisations can flex their usage up and down to meet the peaks and troughs of the business. An example is retailers who need to scale up their online ordering systems during the holiday season. With a traditional Capex model the retail organisation would have to purchase IT systems to handle the highest utilisation rate; therefore, during quiet times (i.e., non-holiday seasons) the systems would be underutilised. The As a Service model frees up funds for the organisation to spend on new, innovative solutions that drive the business forward rather than maintain the status quo.
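The commercial mechanics described above can be made concrete with a small worked example: the consumer pays per unit used, but never less than the committed minimum. The unit rate and commitment figures here are invented:

```python
def monthly_charge(units_used, unit_rate, min_units):
    """As-a-Service billing: pay per unit consumed, subject to a
    committed minimum volume."""
    billable = max(units_used, min_units)
    return billable * unit_rate

# Hypothetical retailer: commits to 100 units/month at $2 per unit.
quiet_month = monthly_charge(units_used=60,  unit_rate=2.0, min_units=100)
peak_month  = monthly_charge(units_used=250, unit_rate=2.0, min_units=100)
print(quiet_month, peak_month)  # quiet month pays the minimum; peak flexes up
```

Compare this with a Capex model sized for the peak: every quiet month would carry the full cost of the peak-sized estate rather than the committed minimum.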

Conclusion: Hybrid Stacks and the Art of the Possible

There is no single solution stack that will address all your needs. Understanding this will allow you to think about each layer and what is needed to provide the right hosting platform, the right security and management services, and the right application delivery and development frameworks to meet the needs of the business process or question you are trying to resolve. The fact is, you will have a hybrid solution stack that combines public and private cloud solutions. Where possible, migrating to new, more agile platforms will provide future proofing and enable easier integration with other solutions. This makes good business sense, as every business must focus on maximizing the value of its applications and data, whether held internally or externally.

We started out by asking: What is Next Generation IT? Next Generation IT may be the latest buzzword, but what is new today is old tomorrow. That said, we can define Next Generation IT by focusing on some key areas:

  • The adoption of Cloud technologies and services is pivotal to Next Generation IT, whether for infrastructure, platform or business application services.
  • Cyber Security is always a threat. Ensure that the solutions and services you buy or build provide adequate levels of protection for your business and your clients.
  • To help businesses make better decisions, the ability to mine and query a wide variety of Big Data is critical to achieving better insight into business operations and direction.
  • Mobility should be a consideration across your application landscape, enabling the workforce and client base to operate from any location and feel connected to the business. This is essential in today’s world.
  • In order to achieve these business gains, enterprises must move forward with Application Modernisation, which should be treated as a driver of business change.

When taking on this journey, work with system integrators and service providers who can work with confidence across public and private cloud services, are able to operate from the Business Process layer to the Infrastructure layer, and can consider the service management and security wrappers that are needed. As open source products mature, consider them as a way to avoid vendor lock-in, which is key to having a more flexible and agile future. Above all, talk to your business not about the restrictions of legacy ball-and-chain infrastructure but about the art of the possible with Next Generation IT solutions.

Link to Part 1

Next Generation IT: Application Part 3 of 6


We could simply break down the Application layer into industry-specific and cross-industry applications and be done with it. However, we need to ensure that applications support Big Data, Business Continuity and Mobility. This includes common APIs and protocols for easy integration. For this, consider the data and its relevance to the business.

One of the key challenges in the Application layer is that you may end up with application sprawl, and, depending on the size of the organisation and how long it has been operating, there is a likely chance you will have multiple applications performing similar if not duplicate tasks. This happens in large global organisations and presents big challenges to CIOs and CTOs who are trying to both consolidate applications and create a unified organisation.

Taking stock of your entire application landscape is key. It is typically not easy to retire applications, as you normally find that one or two business units depend on them and their productivity would stop. Just introducing new applications and asking people to start using them perpetuates the application sprawl; you end up with new and old apps, and data integration becomes people copying and pasting data from one application to the other. This is hardly productive, causes problems with data accuracy and consistency, and is a burden on the employees.

As you review your application landscape, the key concept to understand for Next Generation solutions is Application Modernisation: the practice of taking your legacy application estate and transitioning it to a new application and platform, or upgrading to the latest versions, to provide the features and functionality the business is expecting. The move could be small or large depending on your starting point and the end state you want to achieve. Many enterprises are looking to cloudify their apps, giving them a new platform and commercial framework.

However, we can now start to consider some of the various delivery mechanisms that can help us be more agile and improve our time to market. Let’s start with Cloud Apps, a key enabler in the Next Generation solution set. Although we typically think of Amazon and Google, there are many vendors and products in the enterprise cloud application space. Look at the success of cloud applications like Salesforce; five years ago we would have run for the hills at the thought of hosting our sales and other proprietary data on a public cloud.

A key focus now for CIOs and CTOs is how to migrate their legacy apps to the new cloud-enabled solutions. This can be an expensive but valuable exercise, as we see the maturity and coverage of cloud applications becoming the norm for a majority of businesses. This will provide a good stepping stone for future-proofing your estate and taking advantage of new development and delivery processes like DevOps, which enable rapid development and roll-out of applications and code updates in a seamless, low-risk way, making change the norm versus the exception. Anyone who uses Google or Amazon apps today knows that updates to their applications are rolled out without incident, new features or bug fixes are deployed continuously during the day, and no Change Control ticket or Service Outage notice is created. CIOs and CTOs want their business applications to inherit these principles that are rooted in the consumer space.

Link to Part 4 Platform

Next Generation IT: Platform Part 4 of 6


As we move down the stack from the front-end applications and business process aspects of an IT solution, we dive into back office IT. This is the enabling area of the IT estate that underpins the business applications and provides key support services to the solution. Without back office IT we would not be able to operate.

The Platform layer has many dimensions, and although certain features and support services span the entire stack, we need a launch pad for them. The Platform layer is a good home. If we consider that platforms are a combination of products (excluding the core business applications) that provide a framework on which applications can be delivered and supported, we can examine a combination of platform solutions that include DevOps, Big Data Platforms, Application Hosting, Virtual Desktop and Mobility.

We can also examine some of the supporting platforms that typically operate up and down the stack. These are critical to ensure the integrated solution is able to operate in the business. One of the critical areas is Service Management, in the form of Operational and Business Support Systems (OSS/BSS). You need to be able to report on the operation of the overall IT solution, alert on issues, capture problems before they arise, and ensure that the business is receiving the service performance and stability that is required, aligned to its business SLAs.

Another critical area is Security Systems, which can get more complicated with regulatory compliance laws. However, the fundamentals of antivirus, identity management, audit logging and firewalls are the default must-haves, and these need to integrate up and down the stack. As we move applications to the cloud, be it public or private, we must ensure that we secure the data, data transport, and end-user interaction with the data at all times. The cloud implies that physical infrastructure is shared in some manner, whether with other customers, business units or applications. Therefore, security needs to be integrated much deeper into the Platform and Application layers than in the traditional client-server solutions of the past, which could lock themselves behind physically ring-fenced architecture and firewalls. Today’s security platforms are software enabled, embedding themselves into the applications and platforms to provide much more granular control.

Although some might put Orchestration under the Service Management umbrella, let’s pull it out as a separate area, as it is very important in the new integrated solution architecture to consider how we make the deployment, management and configuration of any IT solution easier, and reduce the pressure on the typically depleted and over-run IT department. Cloud providers could not operate if they did not employ automation within their IT estate. Imagine the change requests coming into Amazon every day, or even every hour. Your workforce and systems could not cope. The fact that you can’t quickly stand up systems, make adjustments and react to business change is not a technology issue; it is a resource and process issue.

Today, workflow management and orchestration tools are designed to orchestrate and manipulate products through common APIs and protocols. This lets you request, deploy and manage complex environments with minimum effort. This may scare some people, but your business is looking at the competition and trying to move quickly. Your IT estate has to do the same thing. Otherwise your business’s time to market will be impacted and revenue lost. I think we all know what typically happens next.

As we look across our IT estate, we need to ensure that the orchestration tools focus not only on provisioning an application but also on deploying the service management tools and agents, configuring the security policies, and setting up connections to other systems and end-user applications. So can we expect orchestration to configure and commission 100% of our IT estate? Not today, but we should be moving towards 70-80% of the estate, with the remaining configuration being custom tweaks and tunes required by the application that are too variable to automate.
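The point that orchestration should cover more than just provisioning the application can be sketched as an ordered workflow. The step names here are hypothetical placeholders for calls to real provisioning, monitoring and security APIs:

```python
# Hypothetical orchestration workflow: each step stands in for a call
# to a real API; here each simply records what it configured.

def provision_app(state):      state.append("app provisioned")
def deploy_monitoring(state):  state.append("monitoring agent deployed")
def apply_security(state):     state.append("security policies applied")
def connect_systems(state):    state.append("integrations configured")

WORKFLOW = [provision_app, deploy_monitoring, apply_security, connect_systems]

def run_workflow(steps):
    """Execute configuration steps in a fixed order, so the
    application never goes live without its management and
    security wrappers."""
    state = []
    for step in steps:
        step(state)
    return state

print(run_workflow(WORKFLOW))
```

The remaining 20-30% of configuration, the custom tweaks and tunes, sits outside a workflow like this precisely because it is too variable to encode as a repeatable step.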

Platforms will always be the connection between applications and infrastructure. In many ways Platform is the most important layer in the IT estate, because it not only provides the home for the business applications but is the enabler for Service Management, Security and Orchestration. As mentioned in the Application section, the move to the cloud is a key driver, and Cloud Platforms are fundamental to the success of moving applications to the cloud, or providing supporting services with easy integration.

Link to Part 5 Infrastructure

Next Generation IT: Infrastructure Part 5 of 6


Infrastructure is the concrete physical foundation of any IT service. Don’t be fooled by the word “cloud.” Behind every cloud is a data centre with servers, storage devices and network gear. In the past we would take clients around data centres and show off shiny boxes and flashing LEDs. A lot of hardware vendors even made style design decisions about how sexy their product looked. Today you are less likely to walk around a data centre. Google and Amazon are good examples of cloud providers who would rather not discuss their infrastructure or data centres, though they have invested hundreds of millions of dollars to provide a global data centre footprint. So easy, right? Build one big data centre (two if you want redundancy), put all your applications and data into it, and job done.

Unfortunately, it’s not that easy; a combination of data regulations, regional restrictions and speed of access are some of the key considerations. This is why you see the cloud providers standing up more cloud data centres across the globe to handle these requirements. Your business may well be in the same position, and therefore you will end up with a dispersed infrastructure footprint.

Acknowledging that we need good infrastructure, what are the key considerations overall? Is it about the success of securely leveraging the resources you have? Let’s start with the data centre itself. This is a major investment, and running and maintaining these facilities must be considered. Data centres are key resources to be leveraged. Using physical segregated infrastructure within the facility can provide added security, ensuring there is no chance of data bleed between applications, business units or clients. If you have disaster recovery services, you typically need another data centre, suitably connected and managed.

Much focus in the data centre is on network connectivity. In today’s connected world, data no longer needs to flow just within the traditional Intranet networks. In fact, Intranet is becoming a thing of the past. Now we are simply connected; we need to connect to applications and data sources from both internal and external locations. The network should support secure and resilient connections with integrated secure VPNs, firewalls, intrusion detection systems and high availability configurations to ensure services are available in the event of an issue or outage.

In terms of compute and storage solutions, there has been a move from traditional server and storage infrastructure — a “build-it-yourself” mind-set — to the new converged infrastructure, which has pre-packaged server, storage and network products in an integrated stack with predefined sizing and growth options. This can be an accelerator, as these converged infrastructures are pretty much ready to go and can be deployed like an appliance, versus the traditional months of negotiating with vendors, arguing the values of preferred products, and delay in knitting all this together in the data centre. So with converged infrastructures, job done.

Well, not quite. The issue with converged infrastructures is that they come at a price. Typically the products used are enterprise grade, designed not to fail, and have the support and backing of major vendors. Today, these elements are being challenged by the application space, with new apps that are self-healing and able to operate at web scale. Therefore, all the resilience and high-end products in the Infrastructure layer are just adding cost to the architecture. Hadoop is a prime example of an open source product that is designed to be built on commodity hardware products; if a server fails, you throw it away and put in a new one. The cluster reforms and off you go again. As we look at email and other business applications, there are more of these cluster-based solutions that are challenging the infrastructure to keep up and meet the cost-to-function needs of the business.

Not all applications need to be hosted in your own data centre, which leads to hybrid solutions. You may wish to host your critical production applications and data in your own controlled facilities, but this typically means those parts of your business get caught up in the change control and restrictions imposed in the data centre. Less sensitive environments, such as development and testing, can be hosted outside your facilities; using cloud-based services for these can improve response times and reduce your time to market.

Link to Part 6 Other Considerations and Conclusion

Next Generation IT: Business Process Part 2 of 6

Business Process

It does not matter if the process is an industry vertical business process or a cross-industry business process. Either way, the process is the output — the deliverable — that will define success or failure for any business. All the other layers are merely enablers to get you to this point.

At this layer we consider asset management, supply chain and order processing, marketing, financial management, customer relationship management and knowledge management, to name a few. Given unique industry and business requirements, the Business Process layer involves many variables and demands strong industry expertise. So what about Next Generation solutions? There are well-established application products that support these processes, but consider the extra-value pieces like analytics. In most cases today the Business Process layer is supported by legacy business intelligence reporting that tells you what has been rather than what could be, drawing only on the data locked inside internal systems. What about all the data you know holds value and insight but cannot access?

The first aspect of Next Generation solutions is support for Big Data Analytics, recognising there is more data that can be analysed to better answer critical business questions and help solidify business decisions. Although the execution of these analytic queries will be done in the Application and Platform layers, the query itself is based on key performance indicators and metrics that require deep business knowledge and the ability to translate this knowledge into an executable hypothesis.
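As a minimal sketch of what "translating business knowledge into an executable hypothesis" can look like, the example below turns a hypothetical KPI (average order value per region) into a runnable query. The `Order` record and the regions are illustrative assumptions, standing in for data surfaced by the Application and Platform layers:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class KpiQuery {
    // Hypothetical order record; real data would come from the Application layer.
    record Order(String region, double value) {}

    // KPI hypothesis: average order value, broken down by region.
    static Map<String, Double> averageOrderValueByRegion(List<Order> orders) {
        return orders.stream()
                .collect(Collectors.groupingBy(
                        Order::region,
                        Collectors.averagingDouble(Order::value)));
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order("EMEA", 100.0),
                new Order("EMEA", 300.0),
                new Order("APAC", 50.0));
        Map<String, Double> kpi = averageOrderValueByRegion(orders);
        System.out.println(kpi.get("EMEA")); // 200.0
        System.out.println(kpi.get("APAC")); // 50.0
    }
}
```

At Big Data scale the same grouping-and-aggregation shape would be pushed down into the analytics platform; the business-side work of choosing the metric and its dimensions is identical.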


If we think about other aspects of a solution that are defined in, and directly dependent on, the Business Process layer but executed in the other layers, Business Continuity Planning must be a factor. This means aligning the response of technology, people and process to the impact of an outage or disaster that cripples the business, and it comes down to knowing the acceptable Recovery Time Objective (RTO) for the critical business applications. For example, a patient record system for a health provider can tolerate only a very short RTO; the technology, people and process have to be designed to handle this, so the solution is generally not cheap. A small manufacturing business, however, can tolerate a less aggressive RTO: although it needs to get production back up and running, it can use alternative technologies, reducing cost while still meeting its customer SLAs.
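The RTO-driven trade-off above can be sketched as a simple tiering rule. The thresholds and recovery strategies here are hypothetical illustrations, not a standard; in practice they come from your own business continuity plan:

```java
import java.time.Duration;

public class RecoveryTier {
    // Hypothetical tiers: tighter RTOs demand costlier recovery architectures.
    static String strategyFor(Duration rto) {
        if (rto.compareTo(Duration.ofMinutes(15)) <= 0) return "active-active across data centres";
        if (rto.compareTo(Duration.ofHours(4)) <= 0)    return "warm standby with replicated storage";
        return "restore from backup at an alternate site";
    }

    public static void main(String[] args) {
        // A patient record system tolerates only minutes of downtime...
        System.out.println(strategyFor(Duration.ofMinutes(10))); // active-active across data centres
        // ...while a small manufacturer may tolerate a day.
        System.out.println(strategyFor(Duration.ofHours(24)));   // restore from backup at an alternate site
    }
}
```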

Another aspect of the Business Process layer to focus on is enabling people to work better with the business applications as well as collaborate and share information and knowledge. Mobility (discussed later) should be considered to empower the workforce, drive the business forward and respond quickly to changing business situations.

Link to Part 3 Application

Next Generation IT: What is it, and How Do I Do It – Part 1 of 6

Next Generation IT: What Is It, and How Do I Do It?


What does “Next Generation IT” really mean? This paper takes a holistic view of the various layers and components that comprise Next Generation IT and guides CIOs and CTOs on what to look for in modernising their applications and creating Next Generation IT solutions. While the layers and components are important in and of themselves, the real value comes from integrating all the pieces without technology bias.

Keywords: Big Data, Business Continuity, Mobility, DevOps, Service Management, Security, Orchestration, Open Source, As a Service, Modernisation, Business Process, Application, Platform, Infrastructure


What is Next Generation IT?

Next Generation IT is becoming a buzzword in the IT arena. Your business has to be considering Next Generation, or you are going to be Old Generation, and that will never do. But what is Next Generation IT, and how does it fit into my existing legacy IT estate? I can’t just rip and replace everything I have installed over the last 15 years; my CFO will have a heart attack! But every time my CEO meets with analysts or our vendor partners, everyone is energised to improve time to market, reduce cost or improve operational efficiency by simply deploying Next Generation IT.

So what is Next Generation IT? What does it consist of, how can it be placed within your IT estate, and how will your business benefit? This paper addresses these questions, targeting CTOs and CIOs who are considering adding to or renewing part or all of their IT estate, and focuses on the issues to consider to help your business move towards a more agile, scalable and future-proof IT estate.

Start with the Stack

This stack diagram is not new; it forms the foundation for where we place and consider technologies and solutions. Before we get into detail, let’s align some terminology. I like using the familiar Lego bricks example. Think of a single technology product as an individual brick: a server, a network switch, an ERP app and so on. A solution is a combination of technologies integrated to solve a business need. Solutions can only work if the pieces integrate successfully. Lego bricks link together because they have standard interfaces (you cannot link a Lego brick and a Duplo brick together), and all Lego bricks can combine to create complex designs. If we go back to our four-layer stack, we can consider both technologies and solutions that fit into each layer. The focus of this paper is on the solutions rather than the technologies, as these are a key aspect of Next Generation IT.

Let’s start from the top and outline what should be considered when thinking about Next Generation solutions.

Link to Part 2 Business Process

Connecting the Boxes


As we develop IT solutions, it is very easy to focus on the core elements: infrastructure, platform and application layers, and the big components such as storage and compute, ERP and middleware technologies. However, as we think about architectures and systems integration, focusing on the connectivity of the data and application is critical to a successful deployment and to satisfying both operational and regulatory requirements.

This focus on connectivity is particularly important as we move to modern, cloud-based applications. In today’s architectures we worry less about the basic interoperability of the big components, because the vendors typically have this well covered; unless you’re trying to put the proverbial square peg in a round hole, your risk is low. But as we make our applications more agile, consider moving workloads between public cloud, private cloud and hosted solutions, and move from testing to production, what we need to worry about more is the connection between data and applications. Is the line that connects these boxes well designed for today and tomorrow?

Consider the plumbing in your house. Would one type of pipe and fittings handle high and low pressure water, gas and oil-based systems? Fittings and pipe structure need to be designed specifically to ensure they integrate and operate with the appliances they connect. Now consider an IT architecture. Don’t confuse the lines that connect the boxes as being the network cables or network connection protocols. The OSI model handles these connections up to layer 4, typically in the infrastructure layer. The layers I want to focus on are those that deal with the data transportation between applications (layers 5-7), where the lines between the boxes are the protocols and APIs that connect the applications together.

These connections need not only to function as interconnections between applications but also to take on the attributes of the overall solution. For example, if you are operating in a secure, regulated environment, you must use secure protocols (e.g., SSL/TLS, SFTP, HTTPS, SSH), making sure that data is encrypted as it moves between applications. If you are writing APIs in Java, the Java Cryptography Extension (JCE) can be used to secure the data connections through encryption.
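As a minimal sketch of using the JCE to encrypt a payload before it crosses an application boundary (AES-GCM is one common authenticated-encryption choice; the class and method names here are illustrative, and real key management belongs in a keystore or HSM):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class SecurePayload {
    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    // Encrypt a payload with AES-GCM before it moves between applications.
    static byte[] encrypt(SecretKey key, byte[] iv, String plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        return cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
    }

    static String decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv); // a fresh IV per message

        byte[] ciphertext = encrypt(key, iv, "patient-record-42");
        System.out.println(decrypt(key, iv, ciphertext)); // round-trips to the original
    }
}
```

GCM also authenticates the data, so tampering in transit is detected at decryption time, which matters as much as confidentiality in regulated environments.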

When considering APIs and protocols as part of the design, strive to future-proof yourself. As we have seen in the Web space, RESTful APIs have become the approach of choice. They reduce risk around application integration, availability of skilled resources and support from application vendors, providing flexibility and adaptability for future development.
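A RESTful connection between two boxes can be as simple as an HTTPS request with a resource-oriented URL. The sketch below builds such a request with the standard `java.net.http` client; the host `api.example.com` and the `orders` resource are placeholder assumptions:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class RestClientSketch {
    // Build a RESTful GET for one resource instance; HTTPS keeps the line secure.
    static HttpRequest buildGet(String baseUrl, String resource, String id) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/" + resource + "/" + id))
                .header("Accept", "application/json")
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildGet("https://api.example.com/v1", "orders", "42");
        System.out.println(request.uri());    // https://api.example.com/v1/orders/42
        System.out.println(request.method()); // GET
        // Sending it is one line more:
        // HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```

Because the contract is just HTTP verbs, URLs and JSON, either end of the line can be re-hosted (public cloud, private cloud, on-premises) without redesigning the connection.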

Consider a client looking to migrate from legacy applications to modern apps, moving both platform and hosting to cloud-enabled solutions. A critical aspect is ensuring connectivity both for the migration itself and for the migrated operational components. The success of most application modernisation projects rests on the ability to move new applications and data sources and reconnect them into the legacy estate.

As we look forward, we are already seeing products, both commercial and open source, that help solution designers interconnect their applications through common data connectors and APIs. One to draw your attention to in the open source space is EzBake, developed by 42Six Solutions (a CSC company). EzBake is in the final stages of being launched; this open source project aims to simplify the data connectivity, federated query and security elements within the big data space. There are already public cloud-based platforms that let you buy a service connecting your data source to a target through a common set of APIs and protocols. EzBake will likely sit in the private cloud space, focused on connecting big data applications and data stores, but the ability to make these application and data connections easily applies across the IT landscape.

It all comes down to the line connecting the boxes. Give it as much thought and consideration as the data and applications when designing a solution, and it will pay dividends, enabling your architecture to integrate and operate successfully. With correctly chosen protocols, your solution will also be future-proofed for the next integration or migration project.