Let the Kids Define our Technology Roadmap

2010 CeBIT Technology Fair

So I think we all recognise that technology is evolving at an exponential rate. Ten years ago we could see and track innovation in years: the new Nokia phone or 15K hard drives would have been anticipated for months, and you could prepare yourself for them well in advance. If you were a technologist in a business, you had time to warm up the CFO or budget holder, and you could work across the stakeholders to get them onside, so by the time the product was launched you had the whole company champing at the bit. However, since this took a long time, other technologies had often been released in the meantime which either impacted the anticipated performance boost or features, or introduced an incompatibility issue, which meant you now had to make another choice.

We have a problem today in that new products and features are being released in much shorter timeframes. You no longer have time to work the stakeholders, and even if you did, the product or feature you were promoting would be out of date by the time you got it agreed and deployed. It’s great to have a five-year plan, but who can currently forecast which technologies are going to be hot in five years? Sure, you can throw the biggies at it (Cloud, Big Data, Mobility), but what does that really mean?

How many times have you heard this: “We should use cloud to enable our business, analyse the data in the cloud, then publish the results to a mobile-enabled workforce”? It sounds insightful, but the reality is that our kids are already using the cloud on their smartphones and seeing dashboards of their likes and status updates, so of course business will adopt these things.

So let’s think about that: what does a typical teenager today expect from their IT experience:

  • Being online all the time, through multiple devices
  • Access to app stores for instant application purchase and provisioning
  • Collaborative working, with data shared easily and securely
  • Work from anywhere, with a consistent working experience
  • Multitasking / application integration
  • Everything in the cloud

I would imagine that if you looked at most CIO, CTO and CEO strategies you would find a comparable list, maybe with more business jargon thrown in just to help maintain the illusion. The reality is that the younger generations are already doing this and more today. And by the way, in five, ten or fifteen years’ time those teenagers are going to expect these things to be in place when they enter the workplace.

An example of this was recently demonstrated in our home. I tried to explain to my daughter that the cloud is actually a physical building somewhere, with servers, storage and networks hosting and providing the apps and data she uses on her phone and tablet. She looked at me, said “I know”, and pointed me to the Wikipedia app on her smartphone. It was at this point I realised that daddy was no longer the fountain of all knowledge.

So what do we need to do? First, watch the younger generation and how they operate with IT. They are not fussed by the actual devices (OK, I might bow down to the Apple brand a bit on that one); mainly they want the user experience to be easy, flexible and real-time. If you wish to experiment with your own kids, remove a device, then remove two, and see how they adapt. If you wish to be really cruel, shut down the Wi-Fi in your house. This may have one of two effects: either they do not talk to you for a few hours, or they find another Wi-Fi access point outside your control and reconnect (and note they may still not talk to you). In most cases the kids will continue to work and operate, maybe in a new location on a different device, but they are working.

So next time you develop a roadmap, or think about the technology strategy for your business, consider spending time looking at how the next generation of employees will wish to work and operate; it may help you think about what direction your business needs to take.

Next Generation IT: Considerations and Conclusion Part 6 of 6

Other Considerations

Although we have covered the four main areas of Next Generation IT solutions, there are three other key elements to consider: Open Source, Mobility and As a Service. Let’s tackle each of these to help complete the picture.

In the past, Open Source was viewed as something that looked interesting, but when it came to mainstream production, commercial applications won the battle in terms of support and ownership. Since the commercial products came with support and maintenance contracts, we had the proverbial “throat to choke” if something went wrong. However, the open source world started to make its mark over 12 years ago with the introduction of the Linux operating system. Although it took 4-5 years to establish itself, we would now consider Linux one of the preferred operating systems for data centres and application hosting. If we leapfrog to today, we are seeing a range of open source products hitting the market and being considered for key production workloads. One of the main areas is Big Data and the introduction of Hadoop, developed out of Yahoo and matured through the internet service providers. This open source product has revolutionised business analytics. With open source products like Hadoop, the risk of feeling disconnected from the developer or product owner, or of having no real support framework, is now mitigated by vendors providing third-party consulting support for your implementations. So you have the look and feel of a commercial product with the flexibility and resources of a crowdsourced open source product.
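Hadoop’s impact came largely from the MapReduce model it popularised. As a purely illustrative sketch (plain Python, not Hadoop’s actual API), the model boils down to a map step that emits key/value pairs, a shuffle that groups them by key, and a reduce step that aggregates each group:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all values by key, as Hadoop does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each group; here, sum the counts per word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data made easy", "big data big insight"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])  # 3
```

On a real cluster, Hadoop runs the map and reduce functions in parallel across commodity nodes and handles the shuffle, storage and failure recovery for you.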

Mobility is a key feature today for any application. How you push data and information to your workforce is critical to productivity. At the same time, collecting data from the mobile workforce is beneficial to business operations. Enabling people to have access to systems securely and quickly means that the workforce can always be online and can operate at, or close to, 100% of their in-office productivity. As business applications enable the mobile workforce to access sales data, ERP and CRM systems, we should also consider pushing information about system operations and threat analysis so that events can be handled pro-actively versus reactively.

As a Service can be nicely aligned to cloud delivery models. The reason for raising the As a Service factor is its valuable purchase model and innovation potential. Traditional purchase models are good for businesses with large Capex budgets, but these are few and far between nowadays. And even the Capex-rich businesses are still looking to spend wisely and have a more predictable commercial model. The As a Service model allows businesses to buy defined services at an agreed-upon unit rate, charged on a consumption or allocation basis, typically with minimum volume and time commitments. Once over the minimum levels, organisations can flex their usage up and down to meet the peaks and troughs of the business. An example is retailers who need to scale up their online ordering systems during the holiday season. With a traditional Capex model the retail organisation would have to purchase IT systems to handle the highest utilisation rate; therefore, during quiet times (i.e., non-holiday seasons) the systems would be underutilised. The As a Service model frees up funds for the organisation to spend on new, innovative solutions that drive the business forward rather than maintain the status quo.
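The commercial mechanics described above can be sketched as a simple calculation (the rate card and numbers here are hypothetical, purely for illustration): the customer pays for at least the committed volume, and anything above it is charged at the agreed unit rate.

```python
def monthly_charge(units_used, committed_units, unit_rate):
    """Consumption-based billing with a minimum volume commitment.

    The customer is billed for whichever is greater, actual consumption
    or the committed minimum, all at the agreed unit rate.
    """
    billable_units = max(units_used, committed_units)
    return billable_units * unit_rate

# Hypothetical retailer: commits to 100 server-hours/month at 0.50 each
print(monthly_charge(80, 100, 0.50))   # quiet month: pays only the minimum
print(monthly_charge(400, 100, 0.50))  # holiday peak: flexes up, pays for usage
```

In a Capex model the retailer would have paid up front for peak capacity all year round; here the quiet months cost only the committed minimum.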

Conclusion: Hybrid Stacks and The Art of the Possible         

There is no single solution stack that will address all your needs. Understanding this will allow you to think about each layer and what is needed to provide the right hosting platform, the right security and management services, and the right application delivery and development frameworks to meet the needs of the business process or question you are trying to resolve. The fact is, you will have a hybrid solution stack that combines public and private cloud solutions. Where possible, migrating to new, more agile platforms will provide future proofing and enable easier integration with other solutions. This makes good business sense, as every business must focus on maximizing the value of its applications and data, whether held internally or externally.

We started out by asking: What is Next Generation IT? Next Generation IT may be the latest buzzword, but what is new today is old tomorrow. That said, we can define Next Generation IT by focusing on some key areas:

  • The adoption of Cloud technologies and services is pivotal to Next Generation IT, whether for infrastructure, platform or business application services.
  • Cyber Security is always a threat. Ensure that the solutions and services you buy or build provide adequate levels of protection for your business and your clients.
  • To help businesses make better decisions, the ability to mine and query a wide variety of Big Data is critical to achieving better insight into business operations and direction.
  • Mobility should be a consideration across your application landscape, enabling the workforce and client base to operate from any location and feel connected to the business. This is essential in today’s world.
  • In order to achieve these business gains, enterprises must move forward with Application Modernisation, which should be treated as a driver of business change.

When taking on this journey, work with system integrators and service providers who can work with confidence across public and private cloud services, are able to operate from the Business Process layer to the Infrastructure layer, and can consider the service management and security wrappers that are needed. As open source products mature, consider them as a way to avoid vendor lock-in, which is key to having a more flexible and agile future. Above all, talk to your business not about the restrictions of legacy ball-and-chain infrastructure but about the art of the possible with Next Generation IT solutions.

Link to Part 1

Next Generation IT: Application Part 3 of 6


We could simply break down the Application layer into industry-specific and cross-industry applications and be done with it. However, we need to ensure that applications support Big Data, Business Continuity and Mobility. This includes common APIs and protocols for easy integration. For this, consider the data and its relevance to the business.

One of the key challenges in the Application layer is that you may end up with application sprawl, and, depending on the size of the organisation and how long it has been operating, there is a likely chance you will have multiple applications performing similar if not duplicate tasks. This happens in large global organisations and presents big challenges to CIOs and CTOs who are trying to both consolidate applications and create a unified organisation.

Taking stock of your entire application landscape is key. It is typically not easy to retire applications, as you normally find that one or two business units depend on them and their productivity would stop. Just introducing new applications and asking people to start using them perpetuates the application sprawl; you end up with new and old apps, and data integration becomes people copying and pasting data from one application to the other. This is hardly productive, causes problems with data accuracy and consistency, and is a burden on the employees.

As you review your application landscape, the key concept to understand for Next Generation solutions is Application Modernisation: the practice of taking your legacy application estate and transitioning it to a new application and new platform, or upgrading to the latest versions to provide the features and functionality that businesses are expecting. The move could be small or large depending on your starting point and the end state you want to achieve. Many enterprises are looking to cloudify their apps, giving them a new platform and commercial framework.

However, we can now start to consider some of the various delivery mechanisms that can help us be more agile and improve our time to market. Let’s start with Cloud Apps, a key enabler in the Next Generation solution set. Although we typically think of Amazon and Google, there are many vendors and products in the enterprise cloud application space. Look at the success of cloud applications like Salesforce; five years ago we would have run for the hills at the thought of hosting our sales and other proprietary data on a public cloud.

A key focus now for CIOs and CTOs is how to migrate their legacy apps to the new cloud-enabled solutions. This can be an expensive but valuable exercise, as we see the maturity and coverage of cloud applications becoming the norm for a majority of businesses. This will provide a good stepping stone for future-proofing your estate and taking advantage of new development and delivery processes like DevOps, which enable rapid development and roll-out of applications and code updates in a seamless, low-risk way, making change the norm versus the exception. Anyone who uses Google or Amazon apps today knows that updates to their applications are rolled out without incident, new features or bug fixes are deployed continuously during the day, and no Change Control ticket or Service Outage notice is created. CIOs and CTOs want their business applications to inherit these principles that are rooted in the consumer space.

Link to Part 4 Platform

Next Generation IT: Platform Part 4 of 6


As we move down the stack from the front-end applications and business process aspects of an IT solution, we dive into back office IT. This is the enabling area of the IT estate that underpins the business applications and provides key support services to the solution. Without back office IT we would not be able to operate.

The Platform layer has many dimensions, and although certain features and support services span the entire stack, we need a launch pad for them. The Platform layer is a good home. If we consider that platforms are a combination of products (excluding the core business applications) that provide a framework on which applications can be delivered and supported, we can examine a combination of platform solutions that include DevOps, Big Data Platforms, Application Hosting, Virtual Desktop and Mobility.

We can also examine some of the supporting platforms that typically operate up and down the stack. These are critical to ensure the integrated solution is able to operate in the business. One of the critical areas is Service Management, in the form of Operational and Business Support Systems (OSS/BSS). You need to be able to report on the operation of the overall IT solution, alert on issues, capture problems before they arise, and ensure that the business is receiving the service performance and stability that is required, aligned to its business SLAs.

Another critical area is Security Systems, which can get more complicated with regulatory compliance laws. However, the fundamentals of antivirus, identity management, audit logging and firewalls are the default must-haves, and these need to integrate up and down the stack. As we move applications to the cloud, be it public or private, we must ensure that we secure the data, the data transport, and the end-user interaction with the data at all times. The cloud implies that physical infrastructure is shared in some manner, whether with other customers, business units or applications. Therefore, security needs to be integrated much deeper into the Platform and Application layers than in the traditional client-server solutions of the past, which could lock themselves behind physically ring-fenced architecture and firewalls. Today’s security platforms are software enabled, embedding themselves into the applications and platforms to provide much more granular control.

Although some might put Orchestration under the Service Management umbrella, let’s pull it out as a separate area, as it is very important in the new integrated solution architecture to consider how we make the deployment, management and configuration of any IT solution easier, and reduce the pressure on the typically depleted and over-run IT department. Cloud providers could not operate if they did not employ automation within their IT estate. Imagine the change requests coming into Amazon every day, or even every hour. Your workforce and systems could not cope. The fact that you can’t quickly stand up systems, make adjustments and react to business change is not a technology issue; it is a resource and process issue.

Today the technology in workflow management and orchestration tools is designed to orchestrate and manipulate products through common APIs and protocols. This lets you request, deploy and manage complex environments with minimum effort. This may scare some people, but your business is looking at the competition and trying to move quickly. Your IT estate has to do the same thing; otherwise your business’s time to market will be impacted and revenue lost. I think we all know what typically happens next.

As we look across our IT estate, we need to ensure that the orchestration tools focus not only on provisioning an application but also deploying the service management tools and agents, configuring the security polices, and setting up connections to other systems and end user applications. So can we expect orchestration to configure and commission 100% of our IT estate? Not today, but we should be moving towards 70-80% of the estate, with the remaining configuration being custom tweaks and tunes required by the application that are too variable to automate.
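At its core, this style of orchestration is a desired-state comparison: declare what the estate should look like, inspect what is actually deployed, and derive the actions needed to converge. A minimal sketch of that idea follows (the component names are hypothetical, and real orchestration tools add dependency ordering, rollback and much more):

```python
def plan_actions(desired, current):
    """Derive provisioning actions from desired vs. current state.

    Both arguments map component name -> configuration (e.g. a version).
    Returns the create/update/delete actions needed to converge.
    """
    actions = []
    for name, config in desired.items():
        if name not in current:
            actions.append(("create", name, config))
        elif current[name] != config:
            actions.append(("update", name, config))
    for name, config in current.items():
        if name not in desired:
            actions.append(("delete", name, config))
    return actions

# Hypothetical estate: roll web forward, add monitoring, retire a batch job
desired = {"web": "v2", "db": "v1", "monitoring-agent": "v1"}
current = {"web": "v1", "db": "v1", "legacy-batch": "v3"}
for action in plan_actions(desired, current):
    print(action)
```

The point is that the same comparison can also drive deployment of management agents and security policies, not just the application itself, which is exactly the 70-80% coverage goal above.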

Platforms will always be the connection between applications and infrastructure. In many ways Platform is the most important layer in the IT estate, because it not only provides the home for the business applications but is the enabler for Service Management, Security and Orchestration. As mentioned in the Application section, the move to the cloud is a key driver, and Cloud Platforms are fundamental to the success of moving applications to the cloud, or providing supporting services with easy integration.

Link to Part 5 Infrastructure

Next Generation IT: Infrastructure Part 5 of 6


Infrastructure is the concrete physical foundation of any IT service. Don’t be fooled by the word “cloud.” Behind every cloud is a data centre with servers, storage devices and network gear. In the past we would take clients around data centres and show off shiny boxes and flashing LEDs. Many hardware vendors even made styling decisions about how sexy their products looked. Today you are less likely to walk around a data centre. Google and Amazon are good examples of cloud providers who would rather not discuss their infrastructure or data centres, though they have invested hundreds of millions of dollars to provide a global data centre footprint. So easy, right? Build one big data centre (two if you want redundancy), put all your applications and data into it, and job done.

Unfortunately, it’s not that easy; a combination of data regulations, regional restrictions and speed of access are some of the key considerations. This is why you see the cloud providers standing up more cloud data centres across the globe to handle these requirements. Your business may well be in the same position, and therefore you will end up with a dispersed infrastructure footprint.

Acknowledging that we need good infrastructure, what are the key considerations? Is it simply about securely leveraging the resources you have? Let’s start with the data centre itself. This is a major investment, and the cost of running and maintaining these facilities must be considered. Data centres are key resources to be leveraged. Using physically segregated infrastructure within the facility can provide added security, ensuring there is no chance of data bleed between applications, business units or clients. If you offer disaster recovery services, you typically need another data centre, suitably connected and managed.

Much focus in the data centre is on network connectivity. In today’s connected world, data no longer needs to flow just within the traditional Intranet networks. In fact, Intranet is becoming a thing of the past. Now we are simply connected; we need to connect to applications and data sources from both internal and external locations. The network should support secure and resilient connections with integrated secure VPNs, firewalls, intrusion detection systems and high availability configurations to ensure services are available in the event of an issue or outage.

In terms of compute and storage solutions, there has been a move from traditional server and storage infrastructure — a “build-it-yourself” mind-set — to the new converged infrastructure, which has pre-packaged server, storage and network products in an integrated stack with predefined sizing and growth options. This can be an accelerator, as these converged infrastructures are pretty much ready to go and can be deployed like an appliance, versus the traditional months of negotiating with vendors, arguing the values of preferred products, and delay in knitting all this together in the data centre. So with converged infrastructures, job done.

Well, not quite. The issue with converged infrastructures is that they come at a price. Typically the products used are enterprise grade, designed not to fail, and have the support and backing of major vendors. Today, these elements are being challenged by the application space, with new apps that are self-healing and able to operate at web scale. Therefore, all the resilience and high-end products in the Infrastructure layer are just adding cost to the architecture. Hadoop is a prime example of an open source product that is designed to be built on commodity hardware products; if a server fails, you throw it away and put in a new one. The cluster reforms and off you go again. As we look at email and other business applications, there are more of these cluster-based solutions that are challenging the infrastructure to keep up and meet the cost-to-function needs of the business.
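The commodity-hardware approach works because of replication: each block of data is placed on several cheap nodes, so losing one node loses no data. A toy sketch of the idea (illustrative only, not HDFS’s actual placement policy):

```python
import itertools

def place_replicas(blocks, nodes, replication=2):
    # Spread each block across `replication` distinct nodes, round-robin
    placement = {}
    node_cycle = itertools.cycle(nodes)
    for block in blocks:
        placement[block] = [next(node_cycle) for _ in range(replication)]
    return placement

def survivors(placement, failed_node):
    # After a node failure, which blocks are still readable somewhere?
    return {block for block, locations in placement.items()
            if any(node != failed_node for node in locations)}

placement = place_replicas(["b1", "b2", "b3"], ["node-a", "node-b", "node-c"])
print(survivors(placement, "node-a"))  # every block survives a single failure
```

With a replication factor of two or more, throwing away a failed server really is the recovery procedure; the cluster re-replicates the affected blocks onto the remaining nodes.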

If we also consider that not all applications need to be hosted in your own data centre, you start getting into hybrid solutions. Although you may wish to host your critical production applications and data in your controlled facilities, this typically means that those parts of your business get caught up in the change control and restrictions typically imposed in the data centre. However, less sensitive environments like development and testing can be hosted outside your facilities. If you use cloud-based services, you can typically improve response times and decrease your time to market.

Link to Part 6 Other Considerations and Conclusion

Next Generation IT: Business Process Part 2 of 6

Business Process

It does not matter if the process is an industry vertical business process or a cross-industry business process. Either way, the process is the output — the deliverable — that will define success or failure for any business. All the other layers are merely enablers to get you to this point.

At this layer we consider asset management, supply chain / order processing, marketing, financial management, customer relationship management and knowledge management, to name a few processes. With unique industry and business requirements, we can see how the Business Process layer requires many variables and strong industry expertise. So what about Next Generation solutions? There are well-established application products that support these processes, but consider the extra value pieces like analytics. In most cases today the Business Process layer is supported by legacy business intelligence reporting that tells you what has been rather than what could be. This comes from the data locked inside internal systems. What about all the data you know has value and insights but are not able to access?

The first aspect of Next Generation solutions is the addition / supportability of Big Data Analytics, recognizing there is more data that can be analysed to better answer critical business questions and help solidify business decisions. Although the execution of these analytic queries will be done in the Application and Platform layers, the query itself is based on key performance indicators and metrics that require deep business knowledge and the ability to translate this knowledge into an executable hypothesis.


If we think about other aspects of a solution that are defined in, and directly dependent on, the Business Process layer but are executed in the other layers, then Business Continuity Planning must be a factor. This is about aligning the response of the technology, people and process to the impact of an outage or disaster that cripples the business. This comes down to knowing the acceptable Recovery Time Objective (RTO) for the critical business applications. For example, a patient record system for a health provider can only accept a very small RTO; the technology, people and process have to be designed to handle this, so the solution is generally not cheap. However, a small manufacturing business can tolerate a less aggressive RTO. Although it needs to get production back up and running, the business can use alternate technologies, reducing the cost and still enabling the business to meet its customer SLAs.
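The RTO trade-off can be made concrete with a small sketch. The tiers, labels and hour thresholds below are hypothetical, purely to illustrate how tighter recovery targets force you into costlier solutions:

```python
# Hypothetical recovery tiers: tighter RTO ceilings demand costlier solutions
TIERS = [
    (1,  "active-active dual data centre", "high"),
    (4,  "warm standby with replication",  "medium"),
    (48, "restore from off-site backup",   "low"),
]

def recovery_tier(rto_hours):
    """Pick the cheapest tier whose RTO ceiling still meets the target."""
    for max_rto_hours, solution, cost in TIERS:
        if rto_hours <= max_rto_hours:
            return solution, cost
    return "best effort", "minimal"

print(recovery_tier(0.5))  # patient record system: needs the expensive option
print(recovery_tier(24))   # small manufacturer: a cheaper approach suffices
```

The business process owner sets the RTO; the Application, Platform and Infrastructure layers then inherit the tier, and its cost, that the target demands.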

Another aspect of the Business Process layer to focus on is enabling people to work better with the business applications as well as collaborate and share information and knowledge. Mobility (discussed later) should be considered to empower the workforce, drive the business forward and respond quickly to changing business situations.

Link to Part 3 Application

Next Generation IT: What is it, and How Do I Do It – Part 1 of 6

Next Generation IT: What Is It, and How Do I Do It?


What does “Next Generation IT” really mean? This paper takes a holistic view of the various layers and components that comprise Next Generation IT and guides CIOs and CTOs on what to look for in modernising their applications and creating Next Generation IT solutions. While the layers and components are important in and of themselves, the real value comes from integrating all the pieces without technology bias.

Keywords: Big Data, Business Continuity, Mobility, DevOps, Service Management, Security, Orchestration, Open Source, As a Service, Modernisation, Business Process, Application, Platform, Infrastructure


What is Next Generation IT?

Next Generation IT solutions are becoming buzzwords in the IT arena. Your business has to be considering Next Generation or you are going to be Old Generation, and that will never do. But what is Next Generation IT, and how does it fit into my existing legacy IT estate? I can’t just rip and replace everything I have installed over the last 15 years; my CFO would have a heart attack! But every time my CEO meets with analysts or our vendor partners, all are energised to improve time to market, reduce cost, or improve operational efficiency by simply deploying Next Generation IT.

So what is Next Generation IT? What does it consist of, how can it be placed within your IT estate, and how will your business benefit? This paper addresses these questions, targeting CTOs and CIOs who are considering adding to or renewing part or all of their IT estate. It focuses on the issues to consider to help your business move towards a more agile, scalable and future-proof IT estate.

Start with the Stack

This stack diagram is not new; it forms the foundation for where we place and consider technologies and solutions. Before we get into detail, let’s align some terminology. I like using the familiar Lego bricks example. Think of a single technology product as an individual brick, e.g. a server, a network switch, an ERP app. A solution is a combination of technologies integrated together to solve a business need. Solutions can only work if the pieces integrate successfully. Lego bricks link together because they have standard interfaces (you cannot link a Lego brick and a Duplo brick together), and all Lego bricks can link together to create complex designs. If we go back to our four-layer stack, we can consider both technologies and solutions that fit into each layer. The focus of this paper is on the solutions rather than the technology, as these are a key aspect of Next Generation IT.

Let’s start from the top and outline what should be considered when thinking about Next Generation solutions.

Link to Part 2 Business Process

Connecting the Boxes


As we develop IT solutions, it is very easy to focus on the core elements: infrastructure, platform and application layers, and the big components such as storage and compute, ERP and middleware technologies. However, as we think about architectures and systems integration, focusing on the connectivity of the data and application is critical to a successful deployment and to satisfying both operational and regulatory requirements.

This focus on connectivity is particularly important as we move to modern, cloud-based applications. In today’s architecture we worry less about the basic interoperability of big components because the vendors typically have this well covered. Unless you’re trying to put the proverbial square peg in a round hole, your risk is low. As we look to make our applications more agile and consider moving workload from public cloud to private cloud or hosted solutions, and as we think about moving from testing to production, what we need to worry about more is the connection between data and applications. Is the line that connects these boxes well designed for today and tomorrow?

Consider the plumbing in your house. Would one type of pipe and fittings handle high and low pressure water, gas and oil-based systems? Fittings and pipe structure need to be designed specifically to ensure they integrate and operate with the appliances they connect. Now consider an IT architecture. Don’t confuse the lines that connect the boxes as being the network cables or network connection protocols. The OSI model handles these connections up to layer 4, typically in the infrastructure layer. The layers I want to focus on are those that deal with the data transportation between applications (layers 5-7), where the lines between the boxes are the protocols and APIs that connect the applications together.

These connections need to not only function as interconnections between applications but also take on the attributes of the overall solution. For example, if you are operating in a secure, regulated environment, you must ensure you are using secure protocols (e.g., SSL, SFTP, HTTPS, SSH), making sure that data is encrypted as it moves between applications. Or, if you are writing APIs, Java with the Java Cryptography Extension (JCE) can be used to secure data connections through encryption.
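As a hedged illustration (using Python’s standard `ssl` module rather than JCE), this is how a client-side TLS context can be configured so that certificate verification and a modern protocol version are enforced before any data moves between applications:

```python
import ssl

# Build a client context with secure defaults: certificate verification
# and hostname checking are both enabled out of the box.
context = ssl.create_default_context()

# Refuse legacy protocol versions; require TLS 1.2 or newer.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any socket wrapped with this context encrypts data in transit, e.g.:
#   with socket.create_connection((host, 443)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           tls.sendall(payload)  # encrypted on the wire

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

The same principle applies whatever the language: the security attributes of the connection should be configured explicitly and verified, not left to defaults inherited from a test environment.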

As part of the design, when considering APIs and protocols, strive to future-proof yourself. As we have seen in the Web space, RESTful APIs have become the protocol of choice, reducing risk around application integration, the availability of skilled resources and support from application vendors, and providing flexibility and adaptability for future developments.

Consider a client looking to migrate from their legacy applications to new, modern apps, moving both platform and hosting to cloud-enabled solutions. A critical aspect is ensuring that connectivity is enabled, first for the migration itself and then for the migrated operational components. Much of the success of application modernisation projects rests on the ability to move and reconnect new applications and data sources into the legacy estate.

As we look forward we are already seeing products, both commercial and open source, that help solution designers interconnect their applications through common data connectors and APIs.  One to draw your attention to in the open source space is EzBake, developed by 42Six Solutions (a CSC company). EzBake is in the final stage of being launched. This open source project aims to simplify the data connectivity, federated query and security elements within the big data space.  There are already public cloud-based platforms that enable you to buy a service that connects your data source to a target through a common set of APIs and protocols. EzBake will likely sit in the private cloud space, focused on connecting big data applications and data stores, but the ability to make these application and data connections easily is usable across the IT landscape.

It all comes down to the line connecting the boxes. Ensuring that this is given as much thought and consideration as the data and applications when designing a solution will pay dividends, enabling your architecture to integrate and operate successfully. And with correctly chosen protocols, your solution will be future proofed for the next integration or migration project.

Big Data Made Easy — Right Cloud, Right Workload, Right Time

Author: Carl Kinson

Over the past five to six years, the platform and infrastructure ecosystem has gone through some major changes. A key change was the introduction of virtualization in infrastructure architecture. This not only provides a consolidation solution and, therefore, a cost-saving answer, but it also enables orchestration and management of the server, network and storage ecosystem.

Before virtualization, dynamically standing up, configuring and scaling an environment was time-consuming, manually intensive and inflexible. Now, virtualization is embraced by many organizations, and its use to support business growth is commonplace. If we wrap some utility commercial frameworks around this, pre-package some servers, storage and network architecture, and add a support framework, we have the makings of an agile, scalable solution.

This all sounds perfect, so where can I buy one?

One what? Well, let’s call this new world order “the cloud.” But which cloud? There are many different cloud solutions: public, private, on-premises, off-premises and everything in between.

Your choice will depend on your workload, need for regulatory compliance, confidence level and finances, so it’s unlikely that just one cloud solution will solve all your needs. This is not uncommon. The reality is that today’s businesses have a complex set of requirements, and no one cloud will solve them all. Not yet, anyway!

The next step is to determine how to align your business needs with the right cloud and how to provision the cloud to deliver your business applications — gaining benefits such as reduced effort and complexity, a standard process, and an app store type of front end.

This is the role ServiceMesh Agility Platform is designed to fill. ServiceMesh, CSC’s recent acquisition, is able to orchestrate multiple clouds with predefined application blueprints that can be rolled out and deployed on a range of public and private cloud solutions.

Great, but how does it help my Big Data Projects?

Since big data is enabled by a collection of applications — open source and commercial — we can now create application blueprints for deploying big data solutions rapidly on the cloud. Let’s concentrate on big data running on a cloud infrastructure. (Quick refresh: Hadoop drives the infrastructure toward commodity x64-based servers with internal dedicated storage, configured in a grid architecture with a high-performance back-end network.)
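
To give a feel for what an application blueprint captures, here is an illustrative sketch. ServiceMesh blueprints are not written in Python and this is not their real format; the node roles, counts and storage figures are invented. The point is that the cluster shape becomes declarative data that an orchestrator can deploy repeatedly.

```python
# Hypothetical blueprint-style description of a small Hadoop grid:
# node roles, counts, and the dedicated internal storage per node.
blueprint = {
    "name": "hadoop-poc-cluster",
    "nodes": [
        {"role": "namenode", "count": 1, "local_storage_tb": 2},
        {"role": "datanode", "count": 8, "local_storage_tb": 12},
    ],
    "network": "high-performance back-end",
}

def total_storage_tb(bp) -> int:
    """Sum the dedicated internal storage across the whole grid."""
    return sum(n["count"] * n["local_storage_tb"] for n in bp["nodes"])

print(total_storage_tb(blueprint))  # 98
```

Because the description is data rather than manual steps, the same blueprint can be pointed at an on-premises private cloud or a public cloud target.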

Today, some companies are running Hadoop clusters in the public cloud on providers such as Amazon and Google. For the longer term, those companies will discover that scaling issues and regulatory compliance will prevent this from being a single-answer solution. At the other end of the spectrum, Yahoo, Facebook and LinkedIn environments are built on dedicated petabyte-scale clusters running on commodity-based architecture. Although we see some very large clients with this kind of need, the typical big data deployment will sit somewhere between these two bookends.

With a controlled virtualization technology underpinning the dedicated Hadoop clusters at scale, configured so that performance is not affected, ServiceMesh can be used effectively to provision and manage big data environments. This can include delivery on public clouds such as Amazon. Also, through the use of big data blueprints, the same solution can be deployed with on- and off-premises cloud solutions, enabling you to choose — through an intuitive interface — the right hosting platform for the workload at hand.

Does the combination make sense?

Absolutely. The intent of big data is to focus on driving business value through insightful analytics, not on provisioning and deploying Hadoop clusters. If we can simplify and speed up the provisioning process, we can align workloads with the most appropriate hosting platform. Deploying and configuring big data solutions is complex and requires key skills; doing it across multiple environments can become time-consuming and very difficult to manage. That’s why an advanced orchestration tool can reduce your resource overhead, costs and errors, while also letting you operate more quickly. Creating this kind of environment is a specialty task. CSC Big Data Platform as a Service can manage multiple clouds and scale workloads faster, with limited upfront investment, so you can derive the right insights and be the best at your business.

Disruptive Technology in Big Data: Not Just Hadoop


By Carl Kinson

You’ve heard the names: Pig, Flume, Splunk, MongoDB and Sqoop, to name a few. And Hadoop, of course. They carry funny names that make us smile, but they represent disruptive technologies in big data that have proven their value to business. And that means they merit serious consideration for what they can do for your company.

To get business value out of the data you are not currently mining, consider how to introduce big data technologies into your business intelligence / analytics environment. Some of the best-known big data implementations are centered on Hadoop, which handles some truly massive amounts of data. For instance, eBay uses a Hadoop cluster to analyze more than 40 petabytes of data to power its customer-recommendations feature.

Hadoop is part of the solution in many cases, but today it is hardly the only one. To begin with, Hadoop is a batch-oriented big data solution that is well suited to handling large data volumes. There are some applications where a company can justify running an independent Hadoop cluster, as eBay does, but those instances will be the exception. More often, companies will get more value from offloading data into Hadoop-type environments that act as data stores, running map-reduce jobs there and feeding the outputs into traditional data warehouses to enrich the data available for analysis.
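
For readers who have not met the pattern, the map-reduce jobs mentioned above follow a simple shape: map each record to key/value pairs, then reduce by key. A toy, single-machine word-count sketch (a real Hadoop job would run the same logic distributed across HDFS blocks):

```python
from collections import Counter
from itertools import chain

# Map phase: turn one record into (key, value) pairs.
def map_phase(record: str):
    return [(word, 1) for word in record.split()]

# Reduce phase: combine all values that share a key.
def reduce_phase(pairs):
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

records = ["big data made easy", "big data at scale"]
result = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
print(result["big"], result["data"])  # 2 2
```

The reduced output is exactly the kind of compact result set that can be seeded into a traditional data warehouse for further analysis.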

Well-established commercial vendors in the ERP/structured data space, such as IBM, SAP and Oracle, have all quickly embraced the Hadoop wave. Examples include SAP HANA + Hortonworks, IBM PureData + IBM BigInsights and Oracle + Cloudera, to name but a few. (Hortonworks, BigInsights and Cloudera are all based on Hadoop, an open source product.)

Many companies, however, will derive more value from a hybrid solution that combines the batch-processing power of Hadoop with “stream-based” technologies that can analyze and return results in real time, using some of the disruptive products I mentioned at the start.

Consider a courier company that geotags its drivers. By combining real-time information about the driver’s location, route plan, traffic information and the weather, the company could reroute a driver if delays are detected on his or her intended route. This is something a batch-oriented system such as Hadoop isn’t designed to address. But using a “streaming” product allows this to happen in near real time.
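
The difference from batch is that each event is acted on as it arrives rather than collected for a later run. A hypothetical sketch of the courier example — the event fields and the 15-minute threshold are invented for illustration, not taken from any streaming product:

```python
# Hypothetical stream-style processing for the courier scenario:
# events are handled one at a time as they arrive.
def reroute_needed(event, max_delay_minutes=15):
    """Flag a driver for rerouting when live traffic data shows a delay
    on the planned route beyond the tolerated threshold."""
    return event["delay_minutes"] > max_delay_minutes

def process_stream(events):
    for event in events:              # in production this would be a live feed
        if reroute_needed(event):
            yield event["driver_id"]  # emit a reroute decision immediately

stream = [
    {"driver_id": "D1", "delay_minutes": 5},
    {"driver_id": "D2", "delay_minutes": 40},
]
print(list(process_stream(stream)))  # ['D2']
```

A batch system would only surface the delay after the next scheduled run; the streaming shape yields the decision while the driver is still en route.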

Each technology on its own is already creating significant disruption in the marketplace. As more companies combine the power of batch- and stream-based big data products and analytics, the disruptive waves will likely grow considerably larger.

Now is a good time to consider how these big data products could be added to your environment, adding functionality and features to your business and helping you make your own waves.