Next Generation IT: Considerations and Conclusion Part 6 of 6

Other Considerations

Although we have covered the four main areas of Next Generation IT solutions, there are three other key elements to consider: Open Source, Mobility and As a Service. Let’s tackle each of these to help complete the picture.

In the past, Open Source was viewed as something that looked interesting, but when it came to mainstream production, commercial applications won the battle in terms of support and ownership. Since the commercial products came with support and maintenance contracts, we had the proverbial “throat to choke” if something went wrong. However, the open source world started to make its mark over 12 years ago with the introduction of the Linux operating system. Although it took 4-5 years to establish itself, we would now consider Linux one of the preferred operating systems for data centres and application hosting. If we leapfrog to today, we are seeing a range of open source products hitting the market and being considered for key production workloads. One of the main areas is Big Data and the introduction of Hadoop, developed out of Yahoo and matured through the Internet service providers. This open source product has revolutionised business analytics. With open source products like Hadoop, the risk of feeling disconnected from the developer / product owner, or of having no real support framework, is now mitigated by vendors providing third-party consulting support for your implementations. So you have the look and feel of a commercial product with the flexibility and resources of a crowdsource-developed open source product.

Mobility is a key feature today for any application. How you push data and information to your workforce is critical to productivity. At the same time, collecting data from the mobile workforce is beneficial to business operations. Enabling people to have access to systems securely and quickly means that the workforce can always be online and can operate at, or close to, 100% of their in-office productivity. As business applications enable the mobile workforce to access sales data, ERP and CRM systems, we should also consider pushing information about system operations and threat analysis so that events can be handled pro-actively versus reactively.

As a Service can be nicely aligned to cloud delivery models. The reason for raising the As a Service factor is its valuable purchase model and innovation potential. Traditional purchase models are good for businesses with large Capex budgets, but these are few and far between nowadays. And even the Capex-rich businesses are still looking to spend wisely and have a more predictable commercial model. The As a Service model allows businesses to buy defined services at an agreed-upon unit rate, charged on a consumption or allocation basis, typically with minimum volume and time commitments. Once over the minimum levels, organisations can flex their usage up and down to meet the peaks and troughs of the business. An example is retailers who need to scale up their online ordering systems during the holiday season. With a traditional Capex model the retail organisation would have to purchase IT systems to handle the highest utilisation rate; therefore, during quiet times (i.e., non-holiday seasons) the systems would be underutilised. The As a Service model frees up funds for the organisation to spend on new, innovative solutions that drive the business forward rather than maintain the status quo.
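To make the commercial difference concrete, here is a minimal sketch in Python comparing a Capex purchase sized for the seasonal peak against an As a Service model charged per unit consumed above a committed minimum. All figures, unit rates and the demand profile are hypothetical illustrations, not vendor pricing.

```python
# Hypothetical comparison of a Capex purchase sized for peak demand versus an
# "as a Service" consumption model with a minimum monthly commitment.
# All figures are illustrative assumptions, not real pricing.

monthly_demand_units = [40, 40, 45, 50, 50, 55, 60, 60, 70, 90, 140, 160]  # peaks late in the year

# Capex: buy enough capacity for the December peak and carry it all year.
capex_unit_cost = 120          # one-off cost per unit of capacity
peak_capacity = max(monthly_demand_units)
capex_total = peak_capacity * capex_unit_cost

# As a Service: pay per unit consumed each month, with a committed minimum of 50 units.
service_unit_rate = 15         # monthly rate per unit consumed
committed_minimum = 50
service_total = sum(max(units, committed_minimum) * service_unit_rate
                    for units in monthly_demand_units)

print(f"Capex (sized for peak):     {capex_total}")
print(f"As a Service (consumption): {service_total}")
```

In this illustrative profile the consumption model costs less over the year because capacity bought for the peak sits idle for most months; the real value, as noted above, is that the saving can be redirected into new solutions rather than maintaining the status quo.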

Conclusion: Hybrid Stacks and The Art of the Possible

There is no single solution stack that will address all your needs. Understanding this will allow you to think about each layer and what is needed to provide the right hosting platform, the right security and management services, and the right application delivery and development frameworks to meet the needs of the business process or question you are trying to resolve. The fact is, you will have a hybrid solution stack that combines public and private cloud solutions. Where possible, migrating to new, more agile platforms will provide future proofing and enable easier integration with other solutions. This makes good business sense, as every business must focus on maximizing the value of its applications and data, whether held internally or externally.

We started out by asking: What is Next Generation IT? Next Generation IT may be the latest buzzword, but what is new today is old tomorrow. That said, we can define Next Generation IT by focusing on some key areas:

  • The adoption of Cloud technologies and services is pivotal to Next Generation IT, whether for infrastructure, platform or business application services.
  • Cyber Security is always a threat. Ensure that the solutions and services you buy or build provide adequate levels of protection for your business and your clients.
  • To help businesses make better decisions, the ability to mine and query a wide variety of Big Data is critical to achieving better insight into business operations and direction.
  • Mobility should be a consideration across your application landscape, enabling the workforce and client base to operate from any location and feel connected to the business. This is essential in today’s world.
  • In order to achieve these business gains, enterprises must move forward with Application Modernisation, which should be treated as a driver of business change.

When taking on this journey, work with system integrators and service providers who can work with confidence across public and private cloud services, are able to operate from the Business Process layer to the Infrastructure layer, and can consider the service management and security wrappers that are needed. As open source products mature, consider them as a way to avoid vendor lock-in, which is key to having a more flexible and agile future. Above all, talk to your business not about the restrictions of legacy ball-and-chain infrastructure but about the art of the possible with Next Generation IT solutions.

Next Generation IT: Application Part 3 of 6

Application

We could simply break down the Application layer into industry-specific and cross-industry applications and be done with it. However, we need to ensure that applications support Big Data, Business Continuity and Mobility. This includes common APIs and protocols for easy integration. For this, consider the data and its relevance to the business.

One of the key challenges in the Application layer is that you may end up with application sprawl; depending on the size of the organisation and how long it has been operating, it is likely that you will have multiple applications performing similar if not duplicate tasks. This happens in large global organisations and presents big challenges to CIOs and CTOs who are trying to both consolidate applications and create a unified organisation.

Taking stock of your entire application landscape is key. It is typically not easy to retire applications, as you normally find that one or two business units still depend on them and would lose productivity without them. Simply introducing new applications and asking people to start using them perpetuates the application sprawl; you end up with new and old apps, and data integration becomes people copying and pasting data from one application to the other. This is hardly productive, causes problems with data accuracy and consistency, and is a burden on employees.

As you review your application landscape, the key concept to understand for Next Generation solutions is Application Modernisation: the practice of taking your legacy application estate and transitioning it to a new application and new platform, or upgrading to the latest versions to provide the features and functionality that businesses expect. The move could be small or large depending on your starting point and the end state you want to achieve. Many enterprises are looking to cloudify their apps, giving them a new platform and commercial framework.

However, we can now start to consider some of the various delivery mechanisms that can help us be more agile and improve our time to market. Let’s start with Cloud Apps, a key enabler in the Next Generation solution set. Although we typically think of Amazon and Google, there are many vendors and products in the enterprise cloud application space. Look at the success of cloud applications like Salesforce; five years ago we would have run for the hills at the thought of hosting our sales and other proprietary data on a public cloud.

A key focus now for CIOs and CTOs is how to migrate their legacy apps to the new cloud-enabled solutions. This can be an expensive but valuable exercise, as we see the maturity and coverage of cloud applications becoming the norm for a majority of businesses. This will provide a good stepping stone for future-proofing your estate and taking advantage of new development and delivery processes like DevOps, which enable rapid development and roll-out of applications and code updates in a seamless, low-risk way, making change the norm versus the exception. Anyone who uses Google or Amazon apps today knows that updates to their applications are rolled out without incident, new features or bug fixes are deployed continuously during the day, and no Change Control ticket or Service Outage notice is created. CIOs and CTOs want their business applications to inherit these principles that are rooted in the consumer space.

Next Generation IT: Infrastructure Part 5 of 6

Infrastructure

Infrastructure is the concrete physical foundation of any IT service. Don’t be fooled by the word “cloud.” Behind every cloud is a data centre with servers, storage devices and network gear. In the past we would take clients around data centres and show off shiny boxes and flashing LEDs. A lot of hardware vendors even made design decisions based on how sexy their products looked. Today you are less likely to walk around a data centre. Google and Amazon are good examples of cloud providers who would rather not discuss their infrastructure or data centres, though they have invested hundreds of millions of dollars to provide a global data centre footprint. So easy, right? Build one big data centre (two if you want redundancy), put all your applications and data into it, and job done.

Unfortunately, it’s not that easy; a combination of data regulations, regional restrictions and speed of access are some of the key considerations. This is why you see the cloud providers standing up more cloud data centres across the globe to handle these requirements. Your business may well be in the same position, and therefore you will end up with a dispersed infrastructure footprint.

Acknowledging that we need good infrastructure, what are the key considerations? How do you securely get the most out of the resources you have? Let’s start with the data centre itself. This is a major investment, and the cost of running and maintaining these facilities must be considered. Data centres are key resources to be leveraged. Using physically segregated infrastructure within the facility can provide added security, ensuring there is no chance of data bleed between applications, business units or clients. If you have disaster recovery services, you typically need another data centre, suitably connected and managed.

Much focus in the data centre is on network connectivity. In today’s connected world, data no longer needs to flow just within the traditional Intranet networks. In fact, Intranet is becoming a thing of the past. Now we are simply connected; we need to connect to applications and data sources from both internal and external locations. The network should support secure and resilient connections with integrated secure VPNs, firewalls, intrusion detection systems and high availability configurations to ensure services are available in the event of an issue or outage.

In terms of compute and storage solutions, there has been a move from traditional server and storage infrastructure — a “build-it-yourself” mind-set — to the new converged infrastructure, which packages server, storage and network products in an integrated stack with predefined sizing and growth options. This can be an accelerator, as these converged infrastructures are pretty much ready to go and can be deployed like an appliance, versus the traditional months spent negotiating with vendors, arguing the merits of preferred products, and then knitting it all together in the data centre. So with converged infrastructures, job done.

Well, not quite. The issue with converged infrastructures is that they come at a price. Typically the products used are enterprise grade, designed not to fail, and have the support and backing of major vendors. Today, these elements are being challenged by the application space, with new apps that are self-healing and able to operate at web scale. Therefore, all the resilience and high-end products in the Infrastructure layer are just adding cost to the architecture. Hadoop is a prime example of an open source product that is designed to be built on commodity hardware products; if a server fails, you throw it away and put in a new one. The cluster reforms and off you go again. As we look at email and other business applications, there are more of these cluster-based solutions that are challenging the infrastructure to keep up and meet the cost-to-function needs of the business.
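To illustrate the self-healing behaviour described above, here is a minimal Python sketch of HDFS-style block replication: each block lives on several commodity nodes, and when a node fails the cluster simply re-copies any under-replicated blocks onto the survivors. The node and block names and the replication factor of three are illustrative assumptions, not the actual Hadoop implementation.

```python
import random

# Minimal sketch of the self-healing idea behind Hadoop-style storage: each block
# is kept on several commodity nodes, and when a node dies the cluster re-copies
# any under-replicated blocks onto the surviving nodes. Purely illustrative.

REPLICATION_FACTOR = 3

def place_blocks(blocks, nodes):
    """Assign each block to REPLICATION_FACTOR distinct nodes."""
    return {block: set(random.sample(nodes, REPLICATION_FACTOR)) for block in blocks}

def handle_node_failure(placement, failed_node, surviving_nodes):
    """Drop the failed node and re-replicate any block that fell below target."""
    for block, holders in placement.items():
        holders.discard(failed_node)
        while len(holders) < REPLICATION_FACTOR:
            holders.add(random.choice([n for n in surviving_nodes if n not in holders]))
    return placement

nodes = [f"node{i}" for i in range(1, 7)]
placement = place_blocks([f"block{i}" for i in range(10)], nodes)
placement = handle_node_failure(placement, "node3", [n for n in nodes if n != "node3"])
assert all(len(holders) == REPLICATION_FACTOR for holders in placement.values())
```

Because the data itself is protected by replication rather than by premium hardware, the failed node can simply be thrown away and replaced, which is exactly what keeps the cost-to-function pressure on the Infrastructure layer.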

If we also consider that not all applications need to be hosted in your own data centre, you start getting into hybrid solutions. Although you may wish to host your critical production applications and data in your controlled facilities, this means that those parts of your business get caught up in the change control and restrictions typically imposed in the data centre. However, less sensitive environments like development and testing can be hosted outside your facilities. If you use cloud-based services, you can typically improve response times and decrease your time to market.

Big Data Made Easy — Right Cloud, Right Workload, Right Time

Author: Carl Kinson

Over the past five to six years, the platform and infrastructure ecosystem has gone through some major changes. A key change was the introduction of virtualization in infrastructure architecture. This not only provides a consolidation solution and, therefore, a cost-saving answer, but it also enables orchestration and management of the server, network and storage ecosystem.

Before virtualization, the ability to dynamically stand up, configure and scale an environment was time-consuming, manually intensive and inflexible. Now, virtualization is embraced by many organizations, and its use to support business growth is commonplace. If we wrap some utility commercial frameworks around this, pre-package some servers, storage and network architecture, and a support framework, we have the makings of an agile, scalable solution.

This all sounds perfect, so where can I buy one?

One what? Well, let’s call this new world order “the cloud.” But which cloud? There are many different cloud solutions: public, private, on-premises, off-premises and everything in between.

Your choice will depend on your workload, need for regulatory compliance, confidence level and finances, so it’s unlikely that just one cloud solution will solve all your needs. This is not uncommon. The reality is that today’s businesses have a complex set of requirements, and no one cloud will solve them all. Not yet, anyway!

The next step is to determine how to align your business needs with the right cloud and how to provision the cloud to deliver your business applications — gaining benefits such as reduced effort and complexity, a standard process, and an app store type of front end.

This is the role the ServiceMesh Agility Platform is designed to fill. ServiceMesh, CSC’s recent acquisition, is able to orchestrate multiple clouds with predefined application blueprints that can be rolled out and deployed on a range of public and private cloud solutions.

Great, but how does it help my Big Data Projects?

Since big data is enabled by a collection of applications — open source and commercial — we can now create application blueprints for deploying big data solutions rapidly on the cloud. Let’s concentrate on big data running on a cloud infrastructure. (Quick refresh: Hadoop drives the infrastructure towards commodity x64-based servers with internal dedicated storage, configured in a grid architecture with a high-performance back-end network.)
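As a rough illustration of what such an application blueprint might capture, here is a small Python sketch describing a Hadoop cluster’s shape, node specification and target cloud. The field names and figures are hypothetical and do not represent the actual ServiceMesh blueprint schema.

```python
# Illustrative sketch of what a big data "application blueprint" might capture:
# the cluster shape, the commodity node specification and the target cloud.
# Field names and values are hypothetical, not the real ServiceMesh schema.

hadoop_blueprint = {
    "name": "hadoop-analytics-cluster",
    "target_cloud": "private-vmware",        # could equally be a public provider
    "master_nodes": 2,
    "worker_nodes": 12,
    "node_spec": {
        "cpu_cores": 16,
        "memory_gb": 64,
        "local_storage_tb": 8,               # internal dedicated storage per node
    },
    "network": {"backend_bandwidth_gbps": 10},
    "software": ["hadoop-hdfs", "hadoop-yarn", "hive"],
}

def estimate_usable_capacity_tb(blueprint, replication_factor=3):
    """Rough usable HDFS capacity after replication overhead."""
    raw = blueprint["worker_nodes"] * blueprint["node_spec"]["local_storage_tb"]
    return raw / replication_factor

print(f"Approx. usable capacity: {estimate_usable_capacity_tb(hadoop_blueprint):.0f} TB")
```

The point of capturing the design this way is repeatability: the same blueprint can be handed to the orchestration layer and provisioned on whichever public or private cloud suits the workload.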

Today, some companies are running Hadoop clusters in the public cloud on providers such as Amazon and Google. For the longer term, those companies will discover that scaling issues and regulatory compliance will prevent this from being a single-answer solution. At the other end of the spectrum, Yahoo, Facebook and LinkedIn environments are built on dedicated petabyte-scale clusters running on commodity-based architecture. Although we see some very large clients with this kind of need, the typical big data deployment will sit somewhere between these two bookends.

With a controlled virtualization technology underpinning dedicated Hadoop clusters at scale, configured in a way that does not impact performance, ServiceMesh can be used effectively to provision and manage big data environments. This can include delivery on public clouds such as Amazon. Also, through the use of big data blueprints, the same solution can be deployed on on- and off-premises cloud solutions, enabling you to choose — through an intuitive interface — the right hosting platform for the workload at hand.

Does the combination make sense?

Absolutely. The intent of big data is to focus on driving business value through insightful analytics, not provisioning and deploying Hadoop clusters. If we can simplify and speed up the provisioning process, we can align the workloads with the most appropriate hosting platform. The complexity of deploying and configuring big data solutions requires key skills, and seeking to do this across multiple environments can become time-consuming and very difficult to manage. That’s why the use of an advanced orchestration tool can reduce your resource overhead, costs and errors, while also letting you operate more quickly. Creating this kind of environment is a specialty task. CSC Big Data Platform as a Service can manage multiple clouds and scale workloads faster, with limited upfront investment, so you can derive the right insights and be the best at your business.

Disruptive Technology in Big Data: Not Just Hadoop

2013-12-02

By Carl Kinson

You’ve heard the names: Pig, Flume, Splunk, MongoDB and Sqoop, to name a few. And Hadoop, of course. They carry funny names that make us smile, but they represent disruptive technologies in big data that have proven their value to business. And that means they merit serious consideration for what they can do for your company.

To get business value out of the data you are not currently mining, you should consider how to introduce big data technologies into your business intelligence / analytics environment. Some of the best-known big data implementations are centered on Hadoop, which handles some truly massive amounts of data. For instance, eBay uses a Hadoop cluster to analyze more than 40 petabytes of data to power its customer-recommendations feature.

Hadoop is part of the solution in many cases, but today it is hardly the only one. To begin with, Hadoop is a batch-oriented big data solution that is well suited to handling very large data volumes. There are some applications where a company can justify running an independent Hadoop cluster, like eBay, but those instances will be the exception. More often, companies will get more value from offloading data into Hadoop-type environments that act as data stores, running map-reduce jobs there and seeding the outputs into traditional data warehouses to add additional data for analysis.
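As a simple illustration of this offload pattern, here is a minimal Python sketch of a map-reduce job that aggregates raw order records per customer, producing the kind of small summary you might then seed into the warehouse. The record format and field names are hypothetical, and the shuffle/sort step is simulated locally rather than run on a real cluster.

```python
# Minimal sketch of the map-reduce offload pattern: a mapper emits (key, value)
# pairs from raw event records and a reducer aggregates them, producing a small
# summary that could then be loaded into a traditional data warehouse.
# The record format (customer id, order amount) is a hypothetical example.

from itertools import groupby

def mapper(records):
    """Emit (customer_id, amount) pairs from raw tab-separated order records."""
    for record in records:
        customer_id, amount = record.split("\t")
        yield customer_id, float(amount)

def reducer(pairs):
    """Sum amounts per customer; on Hadoop this runs after the shuffle/sort."""
    for customer_id, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield customer_id, sum(amount for _, amount in group)

raw_orders = ["c1\t19.99", "c2\t5.00", "c1\t3.50", "c3\t42.00", "c2\t7.25"]
for customer_id, total in reducer(mapper(raw_orders)):
    print(customer_id, total)   # summarised output, ready for the warehouse
```

The same mapper/reducer logic could be run at scale over the Hadoop data store, with only the compact per-customer totals flowing back into the existing analytics environment.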

Well-established commercial vendors in the ERP/structured data space, such as IBM, SAP and Oracle, have all quickly embraced the Hadoop wave. Examples include SAP HANA + Hortonworks, IBM PureData + IBM BigInsights and Oracle + Cloudera, to name but a few. (Hortonworks, BigInsights and Cloudera are all based on Hadoop, an open source product.)

Many companies, however, will derive more value from a hybrid solution that combines the batch-processing power of Hadoop with “stream-based” technologies that can analyze and return results in real time, using some of the disruptive products I mentioned at the start.

Consider a courier company that geotags its drivers. By combining real-time information about the driver’s location, route plan, traffic information and the weather, the company could reroute a driver if delays are detected on his or her intended route. This is something a batch-oriented system such as Hadoop isn’t designed to address. But using a “streaming” product allows this to happen in near real time.
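As a rough sketch of that streaming rule, the Python below processes traffic events as they arrive and flags any driver whose planned route crosses a delayed road segment. The event shape, route plans, segment names and the delay threshold are all hypothetical; a real deployment would run this logic on a stream-processing product rather than in plain Python.

```python
# Minimal sketch of the stream-based rule described above: as traffic events
# arrive, flag any driver whose planned route crosses a delayed road segment.
# Event shapes, route plans and the 10-minute threshold are illustrative only.

DELAY_THRESHOLD_MIN = 10

route_plans = {                      # driver -> remaining road segments (hypothetical)
    "driver-7":  ["A40", "M25-J15", "M4-J3"],
    "driver-12": ["M1-J6", "A414"],
}

def handle_traffic_event(event):
    """Called for each incoming traffic event; returns drivers to reroute."""
    if event["delay_minutes"] < DELAY_THRESHOLD_MIN:
        return []
    return [driver for driver, segments in route_plans.items()
            if event["segment"] in segments]

incoming_events = [
    {"segment": "M25-J15", "delay_minutes": 25},
    {"segment": "A414",    "delay_minutes": 4},
]
for event in incoming_events:
    for driver in handle_traffic_event(event):
        print(f"Reroute {driver}: {event['delay_minutes']} min delay on {event['segment']}")
```

The decision is made per event as it arrives, which is the essential difference from a batch job that would only spot the delay after the fact.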

Each technology on its own is already creating significant disruption in the marketplace. As more companies combine the power of batch- and stream-based big data products and analytics, the disruptive waves will likely grow considerably larger.

Now is a good time to consider how these big data products could be added to your environment, adding functionality and features to your business and helping you make your own waves.