Something to consider when mapping out your IoT strategy and putting initiatives into place... This information was presented to me by Tu Nguyen, a trusted advisor at Violin Memory. It made sense, so I thought I would share it.
How fast can a company access its big data and turn it into meaningful, actionable insights?
The objective is twofold: (1) optimize business operations by analyzing large amounts of data as close to real time as possible, gaining a market-based competitive advantage that no company can afford to be without in a digital age; and (2) continuously improve productivity in all areas of the business through application performance: every click, report, and process gets faster, and those gains add up across many employees. Employees also start doing more when they see the different functions responding faster, which leads to better decisions. It is about latency, and about new and additional workloads: with the freed-up resources (time or hardware), the business can take on new business or further refine its current business.
Too often, IT groups see their function as "maintaining" the medium for the company rather than "leveraging" technology to gain competitive advantage, improve productivity, and align with business objectives. These challenges present organizations with hard choices every day. When businesses can't run reports as frequently as they would like, they must act on potentially stale information, increasing the risk of a misstep. Traditional disk storage and SSDs have struggled to deliver the performance required by mission-critical applications and consolidation initiatives.
Oil and gas (O&G) companies have invested heavily in SAP HANA. The objective is to optimize business operations by analyzing large amounts of data in real time, and HANA can achieve very high performance without requiring any tuning. HANA puts the entire database in memory, so it requires dedicated, RAM-intensive servers. But calls for data still have to leave the database server(s) and go to an external storage device. The constraint is how quickly, and at what yield (2X, 5X ... 10X), the O&G companies' back-end storage (disk, or IBM/EMC SSD arrays) can return that data.
The impact of poor database performance - One of the biggest bottlenecks in a database is I/O. Why? Because disk performance hasn't really advanced in the last 10-15 years, whereas servers and networking have kept getting faster and more powerful. Databases now spend much of their time waiting on I/O calls, which means CPU cycles are burned idling and application time is wasted. Yet year after year, most organizations keep over-provisioning RAM, CPU, and data-center storage in the hope of drastically improving the latency of their mission-critical applications.
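To make that concrete, here is a minimal back-of-the-envelope sketch in Python. The CPU time, I/O count, and latency figures are assumptions chosen only as typical orders of magnitude, not measurements from any particular system:

```python
# Back-of-the-envelope model of where a transaction's time goes.
# All figures are illustrative assumptions, not measurements.
CPU_MS_PER_TXN = 2.0    # CPU work per transaction (assumed)
IOS_PER_TXN = 50        # random page reads per transaction (assumed)

LATENCY_MS = {
    "7.2k RPM disk  ": 8.0,   # ~8 ms per random read (typical order of magnitude)
    "all-flash array": 0.3,   # ~300 microseconds per read (typical order of magnitude)
}

for storage, io_ms in LATENCY_MS.items():
    io_wait = IOS_PER_TXN * io_ms
    total = CPU_MS_PER_TXN + io_wait
    print(f"{storage}: {total:6.1f} ms per transaction, "
          f"{io_wait / total:5.1%} spent waiting on I/O, "
          f"{1000 / total:5.1f} transactions/s per session")
```

Even with only 50 random reads per transaction, the disk-backed transaction in this sketch spends over 99% of its elapsed time waiting on storage, while the same work against low-latency flash completes more than twenty times faster.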
Likewise, solid-state drives (SSDs), designed around sequential read and write operations, struggle under load and often deliver high latency and slow application performance, alienating customers and users, delaying business operations, and slowing time to business intelligence. Organizations try techniques such as wide striping, short stroking, over-provisioning (a.k.a. risk mitigation), and caching to improve performance, but each of these has an impact on the business. When more IOPS are required, organizations must invest capital in additional storage infrastructure, which results in excess, unutilized capacity. Maintenance, tuning, and ongoing optimization all add operational expenditure just to sustain that extra performance.
Why are disk- and SSD-based storage no good for latency? Disks are mechanical devices with moving parts, which means long seek times, particularly for random I/O workloads. When a block is required, the mechanical arm has to move to where the block is stored and read it. In random I/O workloads the blocks are scattered all over the platter, so seek times are much longer, and latency climbs further as more calls queue up. Over the years storage engineers have resorted to methods such as short stroking (using only the outer sectors so the head doesn't have to move far) to achieve the lowest possible latencies, but these techniques increase data-center environmental costs such as floor space, cooling, and power, so they are actually less cost-effective per gigabyte. Many storage vendors also include caching in their SANs, which gives an application the illusion of low latency when in fact the data hasn't even hit the disk yet.
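A quick service-time calculation shows why the mechanics matter. This is a sketch with assumed, typical-order-of-magnitude figures for a 7.2k RPM drive, not the specification of any particular product:

```python
# Rough service-time model for a single 7.2k RPM disk doing random 8 KB reads.
# All figures are assumptions chosen as typical orders of magnitude.
avg_seek_ms = 8.5                      # average seek time (assumed)
rpm = 7200
rotational_ms = (60_000 / rpm) / 2     # wait half a revolution on average, ~4.17 ms
transfer_ms = 8 / 150_000 * 1_000      # 8 KB at ~150 MB/s, ~0.05 ms

service_ms = avg_seek_ms + rotational_ms + transfer_ms
print(f"Random read service time: {service_ms:.2f} ms")
print(f"Random read ceiling:      {1_000 / service_ms:.0f} IOPS per spindle")

# Short stroking: confine the heads to the outer tracks so the average seek
# shrinks, buying latency at the cost of most of the drive's capacity.
short_stroke_ms = 2.0 + rotational_ms + transfer_ms
print(f"Short-stroked ceiling:    {1_000 / short_stroke_ms:.0f} IOPS per spindle")
```

Roughly 80 random IOPS per spindle is why arrays need wide striping across many drives, and why short stroking trades away capacity (raising cost per gigabyte) just to claw back a few milliseconds; flash, with no seek or rotational delay, serves the same read in hundreds of microseconds.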
Why is latency important? One of the keys to success in any application environment is reading and writing data as fast as possible so that productivity stays high. While a data block is being read and placed in the buffer cache, the session that wants the data sits waiting until the operation completes, tying up resources such as CPU. In other words, the CPU can spend 80% of its time waiting for data to process, while the data in your data center spends 80% of its time in a wait state. In plain English, organizations are overpaying for 80% of their infrastructure.
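Taking that 80% figure at face value, a quick Amdahl's-law style sketch shows why faster storage moves the needle while throwing more CPU and RAM at the problem barely does (the 80/20 split from the paragraph above is the only input):

```python
# Sketch: take the 80%-of-time-waiting-on-I/O figure at face value and see
# what accelerating either storage or compute does to end-to-end time.
io_fraction = 0.80             # share of elapsed time spent waiting on storage
cpu_fraction = 1.0 - io_fraction

def overall_speedup(io_speedup, cpu_speedup):
    # Amdahl's law: only the portion you accelerate gets faster.
    return 1.0 / (io_fraction / io_speedup + cpu_fraction / cpu_speedup)

print(f"10x faster storage:  {overall_speedup(10, 1):.2f}x faster overall")
print(f"100x faster storage: {overall_speedup(100, 1):.2f}x faster overall")
print(f"2x more CPU/RAM:     {overall_speedup(1, 2):.2f}x faster overall")
```

Doubling the compute buys roughly an 11% improvement, while removing most of the I/O wait yields a 3-5x improvement, which is exactly why over-provisioning RAM and CPU rarely fixes application latency.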
But what is the cascading effect of poor database performance on the business?
Example 1 - The impact of a slow database is not just an IT problem but also a big business problem. When a database runs slowly, for example because of an inefficient storage subsystem, it creates unacceptable response times and excessive waiting for the consumers of the application. Those consumers may be other applications or end users, for example in a call centre or a sports betting engine. If they have to wait a long time for their actions to be processed, their productivity drops. Productivity is directly linked to a business making money: the more productive the business, the more revenue it makes. So if poor application performance leads to lost productivity, ultimately the business makes less revenue. Let's translate this into a realistic scenario: an online sports betting, Navajo-owned company that uses disk- or SSD-based storage for its database.
They rely on their online betting system to make money. Simply put, the more people that bet, the more money they make. As you can imagine, the business relies heavily on the database to store the events, odds, bets, accounts, etc., and on the application on top of it to process that data. Here's the flow of a bet from the punter's point of view:
1. Log in to their account.
2. Check the events and the odds.
3. Place the bet.
Each of these three basic steps involves the application interacting with the database, and during busy periods they can be I/O intensive. So if the database performs poorly because the disk-based SAN can't keep up, and it takes longer for the application to log the punter in, read the events and odds, and place the bet, the productivity of the betting engine drops. Customers become dissatisfied and may use a competitor's site instead, or simply place fewer bets within a given time frame. If fewer bets are placed then, you guessed it, the business makes less money. If the average customer places 10 bets in 10 minutes on a quiet day but can only place 5 bets in 10 minutes on a busy day, the business loses half its revenue in that window. At, say, $10 per bet, that is $100 per customer on a good day but only $50 on the bad day. Now if 100 customers are doing the same thing every 10 minutes, the business loses $5,000 in those 10 minutes, all because of poor database performance caused by a slow disk-based storage subsystem.
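A tiny script makes that arithmetic explicit; the bet value, bet rates, and customer count are the illustrative figures from the example, not real business data:

```python
# Revenue model for the betting example; all inputs are the illustrative
# figures from the text, not real business data.
def revenue_per_window(bets_per_customer, bet_value=10.0, customers=100):
    """Revenue taken in one 10-minute window."""
    return bets_per_customer * bet_value * customers

normal = revenue_per_window(bets_per_customer=10)   # storage keeping up
slow   = revenue_per_window(bets_per_customer=5)    # storage falling behind

print(f"Quiet day (10 bets/punter): ${normal:,.0f} per 10 minutes")
print(f"Busy day   (5 bets/punter): ${slow:,.0f} per 10 minutes")
print(f"Revenue lost to slow I/O:   ${normal - slow:,.0f} per 10 minutes")
```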
The impact of fast database performance - So how can we fix this problem and turn the waterfall into a success story? In this case the problem is an I/O bottleneck, so that is what has to be fixed. Violin's all-flash architecture provides the sustained low microsecond latency, high IOPS, and high throughput required for high database performance, so data can be read and written much more quickly, and consistently so during peak loads. This produces a new cascading effect, "the impact of fast database performance," where a faster database increases application performance and application performance increases productivity. Increase the productivity of any business and you have increased its revenue.
So take the same betting example once the I/O bottleneck has been removed. During busy periods the punter can now quickly log on, check the events and odds, and place the bet. They are back to placing the average 10 bets in 10 minutes even at peak, which means the business is making its $100 per punter again, double the revenue of the congested case. Across 100 punters that is an extra $5,000 every 10 minutes. In fact, because I/O is now so much faster and there is no storage contention, the company can raise the betting engine's throughput to 15 bets every 10 minutes per punter ;-) That is an extra $50 per 10 minutes for each punter, and for 100 punters another $5,000 in the same time frame.
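Extending the same illustrative model to cover the congested, restored, and improved cases:

```python
# Same illustrative model, comparing the three scenarios from the example.
def revenue_per_window(bets_per_customer, bet_value=10.0, customers=100):
    return bets_per_customer * bet_value * customers

scenarios = {"congested (5 bets)": 5, "restored (10 bets)": 10, "improved (15 bets)": 15}
for name, bets in scenarios.items():
    print(f"{name:20s}: ${revenue_per_window(bets):,.0f} per 10 minutes")
```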
So by using Violin's Flash Fabric Architecture (FFA) flash technology, businesses can turn the waterfall of "the impact of poor database performance" into a success story and increase their revenue.
Furthermore, latency has an enormous impact on an organization's competitive edge. (1) In a world where real-time content and analytics are becoming the norm, the need for lower-latency storage keeps growing. Why? The basic idea behind the phrase "Big Data" is that everything your clients' customers, or your own customers, do increasingly leaves a digital trace (data) that you and your customers can use and analyze. Big Data therefore refers both to that data being collected and to your, or your clients', ability to make use of it. This datafication lets you analyze data in ways that were never possible before. The benefits of Big Data are very real and truly remarkable: data has the power to change behaviors more than education or prohibition ever could, and this emerging trend will be driven by you or your clients, who stand to profit from it. And (2) for a business, as latency values climb with the workload, the impact on SLAs can be dramatic, potentially costing customers or revenue.