Putting IT in the Driver’s Seat with APM at one of the Largest Rental Car Agencies

By Linh C. Ho (twitter: @linh_ho_nyc)

You see them at every airport, but it wasn’t until I met with our customer at one of the largest car rental companies in the world that I realized just how large an operation they run to smoothly and efficiently connect their customers with rental cars. They are a multi-billion dollar business, with over 6,000 local offices, and a fleet of more than half a million rental cars with logistics to manage. And they still manage to keep track of whether or not I’ve topped off the tank upon return…

Customer satisfaction is a huge priority for them, so they invested in traditional APM years ago to keep tabs on application performance. They need to know when there’s a problem, and how to solve it before it impacts customers. In fact, their senior manager of IT Performance Monitoring told me:

“Every 30 minutes of downtime on one business-critical application alone could cost $10 million! That’s millions of dollars within a few minutes if an application is not available or performing as it should. So IT plays an extremely important role in the company’s success and revenue stream.”

Their IT department alone is over 1,500 employees! Talk about a lot of cooks in the kitchen when something goes wrong… when there was an issue, a team of more than 25 people – application developers, release/test managers, capacity managers, application performance engineers, and middleware experts – would gather for an “all hands on deck!” call to figure out the root cause of the problem. They urgently needed to improve process efficiency in problem resolution. They needed a solution that would complement their existing monitoring tools while also adding end-to-end visibility into car rental transactions and their context across all tiers, with relevant metrics. That way they could proactively isolate and resolve issues before there was any impact on the business.

After considering several APM solutions, they selected OpTier for its transaction-based approach to application performance. Building on years of experience capturing user transactions end-to-end, OpTier brought visibility both horizontally across the business transaction and vertically with deep diagnostics, which enabled them to improve their application performance. Knowing the context of each transaction, not just isolated bits of silo-based metrics, significantly accelerates problem resolution.

Now, from the moment a problem is detected, IT knows exactly which users and transactions are affected and which part of the infrastructure is causing the problem, down to the specific method calls, SQL queries, or network segments and protocols involved.
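To make that concrete, here is a minimal sketch of the kind of per-transaction context that turns triage into a simple routing decision. The field names and the ReserveVehicle example are my own illustration, not OpTier’s actual data model.

```python
from dataclasses import dataclass


@dataclass
class TransactionIncident:
    """Hypothetical record of one affected business transaction.

    Field names are illustrative, not OpTier's actual schema.
    """
    transaction: str   # e.g. "ReserveVehicle"
    user_id: str       # which customer was affected
    failing_tier: str  # e.g. "OracleDB" or "ReservationAppServer"
    slow_call: str     # offending method call or SQL statement
    latency_ms: float  # observed end-to-end latency
    sla_ms: float      # agreed service level for this transaction

    def breaches_sla(self) -> bool:
        return self.latency_ms > self.sla_ms


incident = TransactionIncident(
    transaction="ReserveVehicle",
    user_id="cust-48210",
    failing_tier="OracleDB",
    slow_call="SELECT * FROM fleet WHERE branch_id = ?",
    latency_ms=4200.0,
    sla_ms=1500.0,
)

if incident.breaches_sla():
    # Route straight to the owner of the failing tier instead of an all-hands call.
    print(f"Page the DB team: {incident.transaction} is blocked in {incident.failing_tier}")
```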

“We can immediately call the right expert in to resolve the problem, dramatically reducing MTTR and virtually eliminating expensive finger-pointing and all-hands calls. We’ve reduced our ‘all hands’ meetings from 25 people to fewer than 12, resulting in a productivity increase: with the right people involved from the get-go, everyone else can stay focused on more innovative projects rather than sitting on a conference call,” their senior manager of IT Performance Monitoring told me.

Reporting has also improved dramatically with OpTier – from days to minutes! And these reports are shared with the right stakeholders – development, application teams, and program managers – before and after a release for change impact analysis, helping them roll out updates with confidence.

The bottom line is that knowing how your application is performing needs to be as certain as a full tank of gas upon return. Looking at application performance in isolation does not help protect $10 million every half hour! Employing a business transaction-based approach to application performance gives IT organizations like this one the visibility and business context they need to keep the engine running smoothly.


September 7, 2012 at 2:47 pm

Get on that Big Data Bike and Ride!

By Bryan Painter

Over the last 3 weeks I have met with a number of OpTier’s largest customers and prospects.  A topic that always seems to come up is Big Data.  One meeting that stood out to me was with the lead architect for Big Data at a very large and well-respected consulting firm.  We were discussing their Big Data initiative and how it was quickly growing out of control because the line of business was very indecisive about its goals. On paper it seemed simple.  The need for better information was there, the data was all around, and the benefits of using this information to drive revenue for the company were obvious.  However, there came a point where my client turned to me and said, “You know what Bryan, as simple as this sounds…where do we start?”

Analysis Paralysis

Reflecting on that discussion, I see a lot of customers asking that very question.  It’s ironic that Big Data empowers the analysis of massive amounts of data, yet most companies are over-analyzing how to begin analyzing the data!  Facebook has said they process over 500TB of data a day…that’s a crazy amount of information.  Think about it for a minute: that’s like storing 10 Libraries of Congress worth of information every day.  What’s even scarier is that it pales in comparison to the data that exists between the 4 walls of most large enterprises.  In fact, social data is only .001% of the data generated today.  I can see why organizations have such a hard time figuring out where to start.

I shared with him stories of other OpTier customers who have experienced the same pains, and explained that the solution is like learning how to ride a bike when you were a kid.  You just have to get on it and start pedaling.  You are going to skin your knees and get some bumps and bruises, of course, but in 6 months you will be very good at it.  Will it be perfect? No, but that’s why Big Data engineers are called “scientists”…scientists experiment until they get it right.

Start Pedaling

Ok, so how do you “start pedaling” with Big Data?  I firmly believe it comes down to the data you choose to work with.  I recommend that you reduce the complexity and start with a more simplified, more contextualized set of data.  Will you have every data dimension to start?  No, but you can start to experiment, and the findings will begin emerging, shaping the rest of the project.  You see, the most time-consuming aspect of analytics is normalizing all the data and trying to make sense of it: taking feeds from multiple sources and integrating them.   If you can take that step out of the equation by improving the source of data, you start demonstrating real information and value to the business faster.  You will be able to take off your training wheels and upgrade to a 10-speed in no time. And once you hit that stride, you can start going fast and expand to all sorts of areas.
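As a toy illustration of that first pedal stroke, here is a sketch, assuming pandas and a few made-up transaction records, of what it looks like to start with a small, already-contextualized data set and ask a simple question directly, with no multi-feed integration step.

```python
import pandas as pd

# Hypothetical pre-contextualized transaction records: each row already carries
# the customer, product, and performance dimensions, so no multi-feed join is needed.
transactions = pd.DataFrame([
    {"txn_id": 1, "customer_segment": "gold",   "product": "loan",  "latency_ms": 420, "revenue": 1200},
    {"txn_id": 2, "customer_segment": "silver", "product": "loan",  "latency_ms": 980, "revenue": 300},
    {"txn_id": 3, "customer_segment": "gold",   "product": "quote", "latency_ms": 150, "revenue": 0},
])

# A first "pedal stroke": a simple business question answered directly, with no ETL step.
print(transactions.groupby("customer_segment")[["latency_ms", "revenue"]].mean())
```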

So, just like most disruptive technology advancements of the last 20 years, you have to just get on that big data bike and ride…especially when your company’s growth and future are riding on it!

August 29, 2012 at 1:26 pm

5 Steps to De-Risk Your Data Center Consolidation

By Diego Lomanto

Showing positive value from consolidating a data center should be an easy win, right?  Wrong.  In some cases, consolidations have ended up costing more money than they were intended to save.  At OpTier we’ve talked to many customers who have successfully completed consolidation projects and dug deeper into the factors they share in common.  Here are a few tips you can leverage. (If you find this helpful, I recently conducted a webinar on 5 Ways to De-Risk Your Data Center Consolidation which you might want to check out as well.)

The Benefits of Data Center Consolidation

First, let’s recognize that the key benefit of a consolidation comes from cost reduction.  When you run multiple, overlapping data centers, each data center must be staffed, hardware assets are most likely underutilized, and management personnel are probably duplicated.  This stuff costs money.  And it makes the data center estate overly complex.  Ok, simplifying makes sense so far.  But why do these projects fail so often?

Typically, the primary cause of failure is that there are many unknown inter-dependencies, because business processes span widely across data centers.   This means that the issues that may emerge are completely unclear, and IT has difficulty preparing for them. However, there are steps you can take to de-risk your data center consolidation.

Step 1: Create Business Transaction Profiles – Mapping business transaction profiles aligns everyone’s objectives and expectations, and gives IT an understanding of the impact of changes on business users.  Start by generating a business transaction profile, such as the ones produced by OpTier’s Always-on APM solution.

Step 2: Measure Service Levels and Demand Patterns – Capture baseline metrics before you start the project.  This ensures that the consolidated environment continues to deliver the required level of service to all users.

Step 3: Monitor Resource Usage – Estimate the capacity needed for supporting the new heterogeneous workloads in the consolidated environment with your existing APM solution.  Be granular.  This view will help you understand exactly how much capacity you need so you don’t over- or under-provision.

Step 4: Assess Consolidation Readiness and Risk – Identify and assess the various risk factors involved in the consolidation, and provide solid mitigation strategies for each scenario.

Step 5: Validate Business Service Performance Before and After Migration – When you’re done, you should be able to measure how the consolidation has impacted performance. Verify that the application topology and resource consumption KPIs haven’t been degraded by the migration.  Having this data is critical if and when service issues occur post-migration.
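To make Step 5 concrete, here is a minimal sketch of a before-and-after KPI comparison. The metric names and the 10% tolerance are illustrative assumptions, not values taken from OpTier or any specific report.

```python
# Hypothetical baseline comparison for Step 5: the metric names and thresholds
# are made up for illustration.
before = {"avg_response_ms": 850, "p95_response_ms": 1900, "cpu_util_pct": 55}
after  = {"avg_response_ms": 910, "p95_response_ms": 2600, "cpu_util_pct": 78}

TOLERANCE = 0.10  # allow up to a 10% regression before flagging

for kpi, baseline in before.items():
    regression = (after[kpi] - baseline) / baseline
    status = "OK" if regression <= TOLERANCE else "INVESTIGATE"
    print(f"{kpi}: {baseline} -> {after[kpi]} ({regression:+.0%}) {status}")
```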

How OpTier Always-on APM Can Help

Business transaction-driven APM solutions give you objective visibility into how each application operates and allow you to decide which applications to merge or outplace, and what the accompanying business impact will be.  They allow you to measure current SLAs, provide tools to ensure that SLA levels will not drop during and after migration, and let you conduct effective tests so you avoid surprises when you roll out the integrated/merged application to production.  Most importantly, they allow you to detect performance issues early on and solve them before they become slowdowns or outages.  OpTier was built specifically for these purposes.  It’s a great tool for data center consolidations.

Hope these tips helped you in thinking about your data center consolidation strategy and how to ensure its success.  If you want more detail, check out the webinar as well. Do you have any tips to share? We’d love to hear them!

August 13, 2012 at 10:31 pm

How Twitter (and You) Can Avoid Crippling Outages

By Diego Lomanto

So as you all know by now, Twitter went down yesterday for about an hour.  Obviously, as one of the linchpins of the web ecosystem, any downtime at Twitter has a major effect on the internet economy.   Ray Wang, analyst at Constellation Research, estimated that it could cost the internet economy as much as $25M per minute when Twitter goes down.  So this downtime could have cost over $1 billion.  That is probably a bit of an over-estimation, but I think it shows that some downtimes can be extremely costly.

In addition to hard costs, outages like this erode a customer’s faith in the company.  Given their reach, Twitter is probably safe for now. But if outages become frequent, business partners may take their programs elsewhere.  And tweeters may choose another platform to share their innermost thoughts or monitor world events in real time.  That’s a long way off right now, but a series of outages could have a devastating effect.  Just imagine a business that isn’t as entrenched as Twitter – the consequences are much more immediate.  What if a brokerage goes down for an hour and people miss trading opportunities?  Customer defections could be massive.

The cause of the Twitter downtime was not released.  Unfortunately, when Twitter goes down, they typically don’t share too many details of why.  A similar outage happened a month ago, and the cause was identified as a “cascaded bug in one of our infrastructure components.”

So, what can we take from this?  Simple. Outages are bad (you knew that).  But most outages are avoidable, and you need to make sure you have the right technology and procedures in place to detect them before they occur – and, when they do occur, to resolve them as fast as possible.  Here’s where we make our shameless plug: OpTier could help you (and Twitter) avoid and manage outages better.

We have learned from our customers that, unless there is a natural cause such as a fire or a storm, outages typically don’t “just happen.” Something in the infrastructure breaks – a configuration change in a tier, a new JVM, a database that doesn’t get indexed, and so on.  The possibilities are endless.  Then, typically, the problem grows, affects more and more areas of the application, and eventually causes a full-blown outage. Sounds like a “cascading bug” to me.  Since we track SLAs against per-tier thresholds for each transaction, we can help identify the problem at a very early stage. In fact, this early detection has led to some customers completely avoiding outages since they installed us.
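To illustrate the idea (not OpTier’s actual implementation), here is a toy per-tier threshold check. The tier names and thresholds are invented; the point is that a single degraded tier can be flagged early, while the overall transaction still looks healthy.

```python
# Illustrative early-warning check: per-tier latency thresholds for a single
# transaction type. Thresholds and tier names are made up for this sketch.
TIER_SLA_MS = {"web": 100, "app": 300, "mq": 50, "db": 250}


def check_transaction(tier_timings_ms: dict[str, float]) -> list[str]:
    """Return the tiers that breached their per-tier threshold."""
    return [tier for tier, spent in tier_timings_ms.items()
            if spent > TIER_SLA_MS.get(tier, float("inf"))]


# The transaction still completes within its overall budget, but the database
# tier is already degrading -- the kind of signal that precedes a cascade.
breaches = check_transaction({"web": 80, "app": 240, "mq": 30, "db": 410})
print("Early warning, degraded tiers:", breaches)  # ['db']
```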

How do we do it?  We have a patented technology, Active Context Tracking (ACT), which can track transactions as they flow through the infrastructure and through the entire lifecycle. OpTier automatically discovers transactions without modeling, scripting, rule writing or filter setting. There is no need for a separate “discovery mode,” since OpTier continuously discovers changes around the clock.  It just knows a transaction is occurring and follows it.

The key here is that it tracks transactions – not just applications.  Transactions (such as tweets) may be failing while the applications appear to be operating fine.  Just monitoring applications doesn’t give you enough information about problems in the environment and issues as they begin to arise.

For example, using ACT you can associate business transactions with a JDBC error four tiers deep.  Take the screenshots below, where a duplicate key entry causes bad response time for users. If you monitor the application server without the transactional context, everything seems OK. But if you follow the path of the transactions, there are errors downstream.

Here, in the first screenshot, we see transaction status.  Note all those checkmarks: the transactions were successful, but there were errors within the tiers.  Without an OpTier-like tool, it’s likely no one would ever notice.

And in the second screenshot, we click on one of those errors and dig into what the actual error was.
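In code form, the situation looks roughly like the sketch below: the transaction reports success at the front end while a tier several hops down logged a duplicate-key error. The structure, tier names, and error text are illustrative, not actual OpTier output.

```python
# Sketch of the situation in the screenshots: the transaction reports success
# at the front end, but a tier four hops deep logged a duplicate-key error.
transaction = {
    "name": "PostTweet",
    "status": "SUCCESS",           # what application-level monitoring sees
    "tiers": [
        {"tier": "web",        "error": None},
        {"tier": "app-server", "error": None},
        {"tier": "service",    "error": None},
        {"tier": "jdbc",       "error": "Duplicate entry '42' for key 'PRIMARY'"},
    ],
}

hidden_errors = [t for t in transaction["tiers"] if t["error"]]
if transaction["status"] == "SUCCESS" and hidden_errors:
    for t in hidden_errors:
        print(f"{transaction['name']} looks healthy, but {t['tier']} reported: {t['error']}")
```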

So, what a company like Twitter (or any company, for that matter) needs to do is start thinking about transaction management to avoid outages.  Implement technology that can assess system health in a holistic way and, as transactions start to fail, resolve the problem before it ends up shutting the system down.  OpTier’s way is pretty effective, and had Twitter installed similar technology, there’s a good chance the cascading bug would have been caught in advance to keep those tweets flowing.

July 27, 2012 at 2:56 pm

The Big Transaction Data Model

By Diego Lomanto

In my last post, I spoke about Overcoming the Complexity of Big Data with Big Transaction Data.  In that entry we covered how a data model that creates a singular data instance of each transaction, with all the appropriate dimensions, can drastically simplify the analytics process.  What I’d like to do today is go a little deeper into that data model: what dimensions of data are needed to achieve that simplification, and how they can be captured.

Data Silos

The key problem standing in the way of getting value from all that big data you collect is that what originated as your transactional data is fragmented across many different databases.  Because of this fragmentation – even within a single transaction – you can’t get to actionable intelligence from the data.  To get value, you need a comprehensive data integration strategy – but that is time consuming, error prone, and in many cases the data just isn’t there anymore because it was aggregated or decoupled from the other pieces, making its re-linking impossible.

I recently spoke to a major retailer and they drew this diagram on a whiteboard to illustrate just how siloed their data is.  All of this information is scattered across all of these different databases.  There is a connection there – each transaction – but the amount of work to put them together after the transaction has been completed is enormous.

Big Data Silos

Let’s imagine you were selling loaves of bread online.  You would have your web analytics to measure what happened on the site, your CRM to capture data about the customer, APM tools to measure application performance, and multiple logistics systems to process and track the order.  One loaf of bread creates so much disjointed data!

Dealing with these silos forces us to do a lot of work to prep, integrate and cleanse the data so it can be used to answer business questions.  According to this Gartner study (subscription required), that massaging of the data can represent up to 80% of the time spent in an analytics project.  There’s got to be a better way.

The Big Transaction Data Model

This is where Big Transaction Data (BTD) comes into play.  The common approach to the messy integration problem is a combination of data utilities and a lot of hard human work.  In fact, several data integration solutions exist for this very purpose.  Now, obviously I am biased because I work for OpTier, but what BTD does is go back to the fundamental flaw – the separation of the data that occurs while transactions are processed by applications – and solve that flaw at the core. It is a far more effective approach than trying to solve the problem after the data has been generated.  BTD creates a new, simplified data model that captures the relevant data dimensions while the transaction is occurring, so that you don’t have to deal with all that messy integration.

How? Active Context Tracking (ACT) technology automatically tracks the transaction as it traverses web, application, middleware, database and other tiers, collecting customer, performance, order, and business context data at each tier.  It generates a data set representing this new data model on the fly, in real time, with all of these dimensions captured, while transactions flow through your application infrastructure.  The end result is a simple model representing data that would normally require the integration of five or more application data feeds into a singular transaction view.
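As a rough sketch of the resulting data shape (not OpTier’s ACT implementation), imagine one record per transaction that gets enriched at every tier it crosses. The tier names, timings, and order details below are invented for illustration.

```python
import time
import uuid


def new_transaction(customer_id: str, order: dict) -> dict:
    """Start one self-contained transaction record."""
    return {"txn_id": str(uuid.uuid4()), "customer_id": customer_id,
            "order": order, "tiers": []}


def record_tier(txn: dict, tier: str, work) -> None:
    """Run a tier's work and append its timing to the transaction record."""
    start = time.perf_counter()
    work()  # the tier does its real processing here
    txn["tiers"].append({"tier": tier,
                         "elapsed_ms": (time.perf_counter() - start) * 1000})


txn = new_transaction("cust-77", {"item": "bread", "qty": 2})
record_tier(txn, "web",      lambda: time.sleep(0.01))
record_tier(txn, "app",      lambda: time.sleep(0.02))
record_tier(txn, "database", lambda: time.sleep(0.03))

# One record with customer, order, and per-tier performance together.
print(txn)
```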

So, instead of going back in time and prepping the data each time a new question emerges, you go straight to the existing, simple Big Transaction Data when you have a hunch about possible trends or a specific business problem to solve.  And you can give this capability directly to business users, so they don’t have to go through many layers of red tape to get their data – since the main reason they rely on data scientists today is the manual massaging the data requires.

Instead of siloed data, you get something more like this:

Big Transaction Data Model


Now, think about that loaf of bread example.  We have one transaction instance with all the information about that bread order, including the web activity (what they put in the cart, how long they looked at specific pages, etc.), the customer data, the application performance data, and the shipping and logistics information.  This singular instance is much easier to work with from a BI perspective.  What this means for you is that the new data model does not replace your data warehouse approach to answering business questions – it actually enhances it by adding the transaction layer as a fabric that connects your data across the silos.

Practical Uses

Big Transaction Data can be used as a data source for many initiatives from optimizing application performance management to marketing campaign and program analysis.  If you want to read a little bit more about it, check out this page on our web site.

June 25, 2012 at 12:10 pm

Information Transformation & Big Transaction Data

By Noel Clarke

If you’re reading this blog post – you probably followed a link that said something about “big data” or “big transaction data” and you were interested to find out more, so you clicked.

Welcome! I’m glad you clicked, I would have clicked too…<GRIN>

Lately the Internet has been buzzing about “Big Data”…Big Data this, Big Data that. It’s everywhere. But I’d like to ask you to focus your attention on something a little less grandiose – I’ll call it “Big Transaction Data”. This is something that I think is VERY important, and I hope you’ll agree.  We’ve talked about this a bit before on this blog – see “Simplifying the Complexity of Big Data”.

So what is Big Transaction Data?  Let’s start in the middle of that phrase, with the word “transaction”. What is a transaction? I think the best way to understand it is to visualize a concrete example.

Remember the days BEFORE the Internet? I know it hurts when I think “that far back” too – but you can do it if you try…back… back… way back. Do you remember what you had to do when you wanted to buy something? You created a Purchase Order – you remember, those forms printed on actual paper. You would handwrite or typewrite <SHUDDER> what you wanted to buy, including: the product you wanted, the quantity desired, your shipping address and any payment terms. If you were ordering from a bigger company, Purchase Order forms would be printed on special paper that made multiple copies – usually one each of white, yellow, pink and blue.

Then you would take your copy – and forward the other copies to the supplier. And you would eagerly await your delivery.

When the supplier received the Purchase Order they distributed the colored copies to the different departments in their company so they all had a record of the transaction. Each group would dutifully process their copy for their records.  Most departments would simply re-key the data into their own individual systems. All of these copies would end up being filed into large filing cabinets for record-keeping and audit purposes. If you wanted to analyze this data you literally needed a forklift.

Accounting would capture the financial information, Order Fulfillment would note the products, quantities and shipping address, and Sales would update their database of your recent transactions. AH HA! There it is… A Transaction!

These physical transactions were how businesses made money, how they serviced their customers, and how they grew their business. The preceding example is a paper-based transaction. In many ways transactions haven’t changed – they have just become digitized.
Fast-forward to the present and you will see almost all the same activities occurring – except in digital form. We have replaced the paper forms with web browser windows, and the rows of filing cabinets with online information systems.

So today we have CRM systems for sales order history, GL systems for financial information, ERP and inventory systems (often running on mainframes or midrange machines) for production and warehouse information, and finally shipping and tracking systems for delivery confirmation.

Today’s transaction records are not stored in physical warehouses – they are collected from their separate information silos into an Enterprise Data Warehouse, where business analysts somehow try to mine this data for business intelligence, gleaning insights from tiny information breadcrumbs in the hope of steering business outcomes toward positive growth.

At OpTier we believe in a fundamentally different approach – we believe in transaction data. By tagging and following the transaction as it flows through your digital infrastructure, we can deterministically measure all of the touch points across all the different systems that support your transactions. This produces highly accurate and near-real-time data about all transactions, and about all of the systems required to complete each and every business transaction. OpTier automatically produces what armies of data analysts labor tediously to create: business-aligned data that can be used to affect business outcomes.

Big Transaction Data has several unique characteristics that allow businesses to acquire new customers, retain existing customers, and profitably grow their business.

  • Acquire: The Line of Business can leverage data from the BTD solution both to internalize (understand) and to externalize (communicate) their key business value drivers, e.g. speed in transaction execution = margin $ for the customer.
  • Retain: The Line of Business needs data dashboards that provide insight into how their customers’ transactions are being processed. This drives greater customer satisfaction and instills confidence in their customers, leading to less client attrition and greater Long Term Customer Value.
  • Grow Profitably: Sales needs to understand, from a transaction perspective, how many transactions their customers have run over historical periods, to see whether they are growing the account or losing transactions to competitors.

The best part is that this Big Transaction Data is automatically and immediately “aligned” to your business and your mission-critical applications, their constituent transactions, and even the individual customer experiences – out of the box! NO Enterprise Data Warehouse, NO daily ETL jobs, NO de-duping, NO master data management, NO data cleansing, NO parsing and augmenting, NO correlation.

Since OpTier captures ALL transactions in your production environments, we add the word BIG and call it Big Transaction Data. Subscribe to this blog for more information on Big Transaction Data.

June 12, 2012 at 9:14 am

Overcoming the Complexity of Big Data with Big Transaction Data

By Diego Lomanto

For most companies, the challenge with big data lies in making sense of the data acquired in order to apply it to real-world problems when decisions matter most.  Big data is hot right now because we recognize that we are generating more data than ever before and that we might be able to do something with it.  However, much of the execution of big data has been about storage of the data (think Hadoop) and search (think Splunk).  That’s a great start, but do they really solve any problems in a new way on their own?

Start a big data project and you will soon realize that the data itself is limited: it is partial (you take whatever is available), difficult to consume for analysis (because it’s unstructured), and often supports only limited-value use cases.  It’s complicated.

I think the evolution towards better value from the data is still in progress.  We’ll not only see continued progress in storage, but I believe technology will emerge to make working with big data feel a wee bit smaller.  What I mean is that we’ll still collect data at massive scale, but there will be technology that simplifies the big data into a model that is consumable by analytic applications.  In other words, it will transform the data to actually represent something that can be analyzed.

Big Transaction Data

Big Transaction Data (BTD) is a great example of this.   It is complete, comprehensive and correlated.  But it’s also usable.  Let’s have a quick primer on BTD.

What it is, effectively, is the data generated by transactional systems in raw form, modeled to represent the unique end-to-end transaction that drove the data generation in the first place, and stored alongside millions, billions, trillions (insert your own “illion” here) of other transactions.  This is done by technology – typically business transaction management software that observes and reports on transaction performance at each tier.

This is REAL big data in action.  And that’s where business transaction data comes into play.  BTD takes the data and stores it in a consumable form for analytics.  The transaction becomes the anchor for the analytics process.

The Problem with Fragmented Data

For example, say you wanted to analyze the end-to-end process performance of a financial trade system.  The systems that execute financial trades are ridiculously complex.  Think of the most complex system you can imagine and then multiply it by 3.  Why?  Because they use a mix of new and old technologies, they are distributed across multiple tiers, and they are managed by many different stakeholders.  So what you get is this hodgepodge of tiers to execute trades that is incredibly difficult to rationalize into a singular data set.  The unfortunate by-product is that your view of the trade transaction is really just fragmented data.  You can see pieces of the transaction performance but not really ALL of the transaction.

But you still need to analyze trades across tiers and processes as a single input into your trade effectiveness analysis.  So you do the best you can.  You go deep into the tier data and try to correlate it on your own within your own analytic model.  For example, you try to monitor cross-process fallout with a cool-looking dashboard that gives you data on each process, but you don’t really do it well and you miss a lot of cross-process issues.

Or you try to do a cost analysis.  Or a segmentation analysis.  Or a performance analysis.  But the work to create a singular data set is so complicated that you never really have full confidence in the results.

Big Transaction Data in Action

Here is a great opportunity to employ big transaction data.  Instead of working with billions of manually correlated data points, let’s simplify and work with millions of well-defined transactions instead – end-to-end transactions that represent each trade across each process in full.  Now you have a data set that you can inject into your BI platform, or simply use the BI tools within the big transaction data solution itself for analysis.
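As a hedged sketch of what that looks like in practice, here are a few hypothetical end-to-end trade records analyzed with pandas. The desks, statuses, and timings are invented; the point is the shape: one correlated row per trade.

```python
import pandas as pd

# Hypothetical end-to-end trade transactions, one row each, already correlated
# across tiers -- the shape of data the post argues for, not real trade data.
trades = pd.DataFrame([
    {"trade_id": "T1", "desk": "equities", "status": "completed", "end_to_end_ms": 320},
    {"trade_id": "T2", "desk": "equities", "status": "failed",    "end_to_end_ms": 1100},
    {"trade_id": "T3", "desk": "fx",       "status": "completed", "end_to_end_ms": 210},
    {"trade_id": "T4", "desk": "fx",       "status": "completed", "end_to_end_ms": 260},
])

# Fallout rate and latency per desk, straight from the transaction records,
# with no cross-system correlation step.
summary = trades.groupby("desk").agg(
    fallout_rate=("status", lambda s: (s == "failed").mean()),
    p95_latency_ms=("end_to_end_ms", lambda s: s.quantile(0.95)),
)
print(summary)
```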

So back to those 3 Cs.  The data is complete – all of the information generated by BTM lands in one end-to-end view.  It’s comprehensive – capturing ALL interactions. And it’s correlated – it carries the vital metadata such as user, tiers, and so on. The result is easy-to-consume, meaningful analytics leading to business outcomes.

So, big data is hot.  But it’s not quite there yet.  We’re waking up to more data every day, but we’re still working to rationalize it.  Fortunately, the technology is on its way to simplify big data and extract more (true) value from it.

May 9, 2012 at 9:11 am
