Six expert database performance tips and tricks



Table of Contents
Chapter 1: Getting started with database for APM
Chapter 2: Challenges in implementing an APM strategy
Chapter 3: Modern enterprise databases explained
Chapter 4: Top six database performance metrics to monitor in enterprise applications
Chapter 5: AppDynamics approach to APM
Chapter 6: APM monitoring tips and tricks

Chapter 1: Getting started with database for APM

Application Performance Management, or APM, is the monitoring and management of the availability and performance of software applications. An integral part of the APM equation is an intimate understanding of the health of your databases and the performance activity surrounding them. While APM includes a deep understanding of the behavior of the application servers, the database is a core component of what makes your application and end-user experience successful. Thus, it is as vital to understand your database metrics in the global APM picture as it is to understand your application and infrastructure metrics.

Different people can interpret APM differently, so this article attempts to qualify what APM is, what it includes, and why it is relevant to your business. First, if you are going to take control of the performance of your applications, then it is important that you understand what you want to measure and how you want to interpret it in the context of your business. Then, we'll jump into which database metrics are vital to understand to ensure the optimal health of the databases that your applications interact with. We'll begin with an introduction to APM.

What is Application Performance Management (APM)?

As applications have evolved from stand-alone applications to client-server applications to distributed applications and ultimately to cloud-based elastic applications, application performance management has evolved to follow suit. When we refer to APM, we refer to managing the performance of applications such that we can determine when they are functioning normally and when they are behaving abnormally. Furthermore, when something goes wrong and an application is behaving abnormally, we need to identify the root cause of the problem quickly so that we can remedy it. We might observe things like:
The behavior of the application itself
The databases that interface with the application
The physical hardware upon which the application is running
The virtual machines or containers in which the application is running
Supporting infrastructure, such as caches, queue servers, external web services, and legacy systems

However, the list does not stop there. A vital part of your application infrastructure is the set of databases that power your applications. In addition to monitoring your application performance, it is crucial to understand the performance behavior of your databases through metrics including:
Explain plans
Wait states
Disk I/O
Network I/O
CPU
Current and historical data
Sub-minute granular data

Once we have captured performance metrics from all of these sources, we need to interpret and correlate them to understand their impact on your business transactions. This is where the magic of APM kicks in. APM vendors employ experts in different technologies so that they can understand, at a deep level, what performance metrics mean in each system and then aggregate those metrics into a holistic view of your application. The next step is to analyze this holistic view of your application performance against what constitutes normalcy. For example, if key business transactions typically respond in less than 4 seconds on Friday mornings at 9 am, but they are responding in 8 seconds on this particular Friday morning at 9 am, then the question is: why? An APM solution needs to identify the paths through your application for those business transactions, including external dependencies and environmental infrastructure, to determine where they are deviating from normal.
It then needs to bundle all of that information together into a digestible format and alert you to the problem. You can then view that information, identify the root cause of the performance anomaly, and respond accordingly. Finally, depending on your application and deployment environment, there may be things that you can tell the APM solution to do to automatically remediate the problem. For example, if your application is running in a cloud-based environment and has been architected in an elastic manner, you can configure rules to add additional servers to your infrastructure under certain conditions.

Thus, we can refine our definition of APM to include the following activities:
The collection of performance metrics across an entire application environment
The interpretation of those metrics in the context of your business applications
The analysis of those metrics against what constitutes normalcy
The capture of relevant contextual information when abnormalities are detected
Alerts informing you about abnormal behavior
Rules that define how to react and adapt your application environment to remediate performance problems

Why is APM important?

It probably seems obvious to you that APM is important, but you will likely need to answer the question of APM's importance to someone like your boss or the company CFO who wants to know why she must pay for it. To qualify the importance of APM, let's consider the alternatives to adopting an APM solution and assess the impact regarding resolution effort and elapsed downtime. First, let's consider how we detect problems. An APM solution alerts you to abnormal application behavior, but if you do not have an APM solution then you have a few options:
Build synthetic transactions
Manual instrumentation
Wait for your users to call customer support!?

A synthetic transaction is a transaction that you execute against your application and with which you measure performance. Depending on the complexity of your application, it is not difficult to build a small program that calls a service and validates the response (a minimal sketch of such a check appears at the end of this section). However, what do you do with that program? If it runs on your machine, then what happens when you are out of the office? Furthermore, if you do detect a functional or performance issue, what do you do with that information? Do you connect to an email server and send alerts? How do you know if this is a real problem or a normal slowdown for your application at this hour and day of the week? Finally, detecting the symptom is only half the battle: how do you find the root cause of the problem?

The next option is manually instrumenting your application: adding performance monitoring code directly into your application and recording metrics to a database or logs on a file system. Challenges with manual instrumentation include deciding what parts of the code to instrument, how to analyze the data, how to determine normalcy, how to propagate problems up to someone to analyze, what contextual information is relevant, and so forth. Furthermore, you have introduced a new problem: the perpetual maintenance of performance monitoring code in your application that you are now responsible for. Can you dynamically turn it on and off so that your performance monitoring code does not negatively affect the performance of your application? If you learn more about your application and identify additional metrics you want to capture, do you need to rebuild your application and redeploy it to production? What if your performance monitoring code has bugs?

There are additional technical options, but what I find is that companies are most often alerted to performance problems when their customer service organization receives complaints from users. I do not think I need to go into details about why this is a bad idea!

Next, let us consider how to identify the root cause of a performance problem without an APM solution. Most often I have seen companies do one of two things:
Review runtime logs
Attempt to reproduce the issue in a development / test environment

Log files are excellent sources of information, and many times they can identify functional defects in your application (by capturing exceptions and errors). However, when experiencing performance issues that do not raise exceptions, logs typically only introduce additional confusion. You may have heard of, or been directly involved in, a production war room. These war rooms are characterized by finger pointing and attempts to absolve one's own components so that the pressure to resolve the issue falls on someone else. The bottom line is that these meetings are neither fun nor productive. Alternatively, and usually in parallel, the development team is tasked with reproducing the problem in a test environment. The challenge here is that you usually do not have enough context for these attempts to be fruitful. Furthermore, if you can reproduce the problem in a test environment, that is only the first step; you still need to identify the root cause of the problem and resolve it!

In summary, APM is important to you so that you can understand the behavior of your application, detect problems before your users are impacted, and rapidly resolve those issues.
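To make the synthetic-transaction option described above concrete, here is a minimal sketch in Python. The endpoint URL, the 4-second threshold, and the run_synthetic_transaction helper are illustrative assumptions; a real APM solution would also baseline these timings, correlate them with downstream tiers, and route alerts instead of printing.

```python
import time
import urllib.request

ENDPOINT = "https://example.com/api/health"  # hypothetical service endpoint
THRESHOLD_SECONDS = 4.0                      # acceptable response time for this check


def run_synthetic_transaction():
    """Call the service, validate the response, and time the round trip."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
            body = response.read()
            healthy = response.status == 200 and len(body) > 0
    except Exception as exc:
        # A network or server failure counts as a failed check.
        return {"ok": False, "error": str(exc), "elapsed": time.monotonic() - start}
    elapsed = time.monotonic() - start
    return {"ok": healthy and elapsed <= THRESHOLD_SECONDS, "elapsed": elapsed}


if __name__ == "__main__":
    print(run_synthetic_transaction())  # in practice you would alert, not just print
```

Even this tiny script raises the questions posed above: where does it run, who watches its output, and how do you distinguish a real problem from a normal slowdown for this hour and day of the week?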
In business terms, an APM solution is necessary because it reduces your Mean Time To Resolution (MTTR), which means that performance issues are resolved more swiftly and efficiently so that the impact on your business's bottom line is reduced.

Evolution of APM

The APM market has matured substantially over the years, mostly in response to changing application technologies and deployments. When we had very simple applications that directly accessed a database, APM was not much more than a performance analyzer for a database. However, as applications migrated to the web and we witnessed the first wave of application servers, APM solutions came into their own. At the time, we were very concerned with the performance and behavior of individual moving parts, such as:
Physical servers and the operating system hosting our applications
Application runtime behavior
Application response time

We captured metrics from all of these sources and stitched them together into a holistic story. We were genuinely interested in operating system reads and writes, network latency, and so forth. Not to mention, we raised alerts whenever a server went down. Advanced implementations even introduced the ability to trace a request from the web server that received it across tiers to any backend system, such as a database. These were robust solutions, but then something happened to change our world: the cloud.

The cloud changed our view of the world because we no longer took a system-level view of the behavior of our applications, but rather an application-centric view. The infrastructure upon which an application runs became abstracted, and what became more important is whether or not an application can perform optimally. If an individual server goes down, we do not need to worry as long as the application's business transactions are still satisfied. As a matter of fact, cloud-based applications are elastic, which means that we should expect the deployment environment to expand and contract on a regular basis. For example, if you know that your business experiences significant load on Fridays from 5 pm to 10 pm, then you might want to start up additional virtual servers to support that additional load at 4 pm and shut them down at 11 pm. The former APM monitoring model of raising alerts when servers go down would drive you nuts. Furthermore, by expanding and contracting your environment, you may find that individual server instances only live for a matter of a few hours. You may still find some APM solutions from the old world, but modern APM vendors have seen these changes in the industry and have designed APM solutions that focus on your application behavior, placing far greater importance on the performance and availability of business transactions than on the underlying systems that support them.

Buy versus build

This article has covered much ground, and now you are faced with a choice: do you evaluate APM solutions and choose the one that best fits your needs, or do you try to roll your own? I think this comes down to the same questions that you need to ask yourself in any buy-versus-build decision: what is your core business, and is it financially worth building your own solution? If your core business is selling widgets, then it probably does not make sense to build your own performance management system. If, on the other hand, your core business

is building technology infrastructure and middleware for your clients, then it might make sense (but see the answer to question two below). You also have to ask yourself where your expertise lies. If you are a rock star at building an e-commerce site but have not invested the years that APM vendors have in analyzing the underlying technologies to understand how to interpret performance metrics, then you run the risk of leaving your domain of expertise and missing something vital.

The next question is: is it economical to build your own solution? This depends on how complex your applications are and how downtime or performance problems affect your business. If your applications leverage a lot of different technologies (e.g., Node.js, Java, .NET, Python, web services, databases, NoSQL data stores), then it is going to be a massive undertaking to develop performance management code for all of these environments.

Finally, ask yourself about the impact of downtime or performance issues on your business. If your company makes its livelihood by selling its products online, then downtime can be disastrous. Moreover, in a modern, competitive online sales world, performance issues can impact you more than you might expect. Consider how the average person completes a purchase: she typically researches the item online to choose the one she wants. She'll have a set of trusted vendors (and hopefully you are in that honored set), and she'll choose the one with the lowest price. If the site is slow, then she'll just move on to the next vendor in her list, which means you just lost the sale. Additionally, customers place much value on their impression of your web presence. This is a hard metric to quantify, but if your website is slow, then it may damage customer perceptions of your company and hence lead to a loss of confidence and sales.

Conclusion

Application Performance Management involves measuring the performance of your applications, capturing performance metrics from the individual systems that support your applications, and then correlating them into a holistic view. The APM solution observes your application to determine normalcy, and when it detects abnormal behavior, it captures contextual information about the abnormal behavior and notifies you of the problem. Advanced implementations even allow you to react to abnormal behavior by changing your deployment, such as by adding new virtual servers to an application tier that is under stress. An APM solution is important to your business because it can help you reduce your mean time to resolution (MTTR) and lessen the impact of performance issues on your bottom line. If you have a complex application and performance or downtime issues can negatively affect your business, then it is in your best interest to evaluate APM solutions and choose the best one for your applications.

This article reviewed APM and helped outline when you should adopt an APM solution. In the next article, we'll examine the challenges in implementing an APM strategy and dive much deeper into the features of APM solutions so that you can better understand what it means to capture, analyze, and react to performance problems as they arise. All of this is to say that if you have a complex environment and performance issues or downtime are costly to your business, then you are far better off buying an APM solution that allows you to focus on your core business and not on building the infrastructure to support your core business.

Chapter 2: Challenges in implementing an APM strategy

The last article presented an overview of Application Performance Management (APM), described high-level strategies and requirements for implementing APM, provided an overview of the evolution of APM over the past several years, and offered some advice about whether you should buy an APM solution or build your own. This article expands upon that foundation by presenting the challenges to effectively implementing an APM strategy. Specifically, this article presents the challenges in:
Capturing performance data from disparate systems
Analyzing that performance data
Automatically, or programmatically, responding to performance problems

Capturing performance data

Modern distributed application environments leverage a wide variety of technologies. For example, you may have multiple application clusters, a SQL database, one or more NoSQL databases, a caching solution, web services running on alternate platforms, and so forth. Furthermore, we are finding that certain technologies are better at solving certain problems than others, which means that we are adding more technologies into the mix. To effectively manage the performance of your environment, you need to gather performance statistics from each component with which your application interacts. We can categorize these metrics into two broad buckets:
Business Transaction Components
Infrastructure Components

Measuring business transaction performance

The previous article emphasized the importance of measuring business transactions as an indicator of the performance of your application because business transactions reflect the actions necessary to fulfill a real-user request. If your users can complete their business transactions in the expected amount of time, then we can say that the performance of the application is acceptable. However, if business transactions are unable to complete or are performing poorly, then there is a problem that needs to be addressed.

Business transactions can be triggered by any interaction with your application, whether that interaction is a web request, a web-service request, or a message that arrives on a message queue. Business transactions are composed of various components, or segments, that run on tiers: as a request passes from one system to another, such as by executing a web service call or executing a database query, we add the performance of that individual tier to the holistic business transaction. Once you have captured the performance of a business transaction and its constituent tiers, the fun begins. The next section describes analysis in more depth, but assuming that you have identified a performance issue, the next step is to capture a snapshot of the performance trace of the entire business transaction, along with any other relevant contextual information.

Measuring infrastructure performance

In addition to capturing the performance of business transactions, you are going to want to measure the performance of the infrastructure in which your applications are running. Unfortunately, the performance of your application can be impacted by factors other than the execution of your application code alone.
To effectively manage the performance of your application, you need to gather container metrics such as the following:
Server: CPU load, requests per second, memory usage
Operating system: network usage, I/O rates, system threads
Hardware: CPU utilization, system memory, network packets

These are just some of the relevant metrics, but you need to gather information at this level of granularity to assess the health of the environment in which your application is running. Moreover, as we introduce additional technologies into the technical stack, such as databases, NoSQL databases, internal web services, distributed caches, key/value stores, and so forth, each has its own set of metrics that need to be captured and analyzed. Building collectors that capture these metrics and then accurately interpreting them can be challenging.

Analyzing performance data

Now that we have captured the response times of business transactions, both holistically and at the tier level, and we have collected a conglomeration of metrics, the next step is to combine these data points and derive business value from them. As you might surmise, this is a non-trivial effort.

Let's start with business transactions. We already said that we needed to generate a unique token for each business transaction and then pass that token from tier to tier, tracking the response time for each tier and associating it with that token. We need a central server to which we send these constituent segments so that they can be combined into an overall view of the performance of the business transaction. Figure 2 shows this visually. Therefore, an APM strategy that effectively captures business transactions not only needs to measure the performance of the business transaction as a whole but also needs to measure the performance of its constituent parts. Practically, this means that you need to define a global business transaction identifier (token) for each request, find creative ways to pass that token to other services, and then access that token on those servers to associate each segment of the business transaction with the holistic business transaction on an APM server. Fortunately, most communication protocols support mechanisms for passing tokens between machines, such as using custom HTTP headers in web requests. The point is that this presents a challenge because you need to account for all of these communication pathways in your APM strategy.
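As an illustration of the token-passing idea, here is a minimal sketch in Python. The X-BT-Token header name and the helper functions are illustrative assumptions, not a specific APM product's protocol.

```python
import uuid
import urllib.request
from typing import Optional

BT_HEADER = "X-BT-Token"  # hypothetical custom header carrying the token


def start_business_transaction() -> str:
    """Entry tier: mint a globally unique token for this request."""
    return uuid.uuid4().hex


def call_downstream(url: str, token: str) -> bytes:
    """Pass the token to the next tier so that its segment can be stitched
    to the same business transaction on the central monitoring server."""
    request = urllib.request.Request(url, headers={BT_HEADER: token})
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.read()


def extract_token(headers: dict) -> Optional[str]:
    """Downstream tier: recover the token from the incoming request headers."""
    return headers.get(BT_HEADER)
```

Each tier would report its segment's timing, keyed by this token, to the central server, which assembles the segments as shown in Figure 2.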

Figure 2: Assembling segments into a business transaction

Analyzing the performance of a business transaction might sound easy on the surface: compare its response time to a service-level agreement (SLA) and, if it is slower than the SLA, raise an alert. Unfortunately, in practice it is not that easy. Instead, we want to determine what constitutes normal and identify when behavior deviates from it. We need to capture the response times of individual business transactions, as a whole, as well as the response times of each of those business transactions' tiers, or segments. For example, we might find that the Search business transaction typically responds in 3 seconds, with 2 seconds spent on the database tier and 1 second spent in a web service call. However, this introduces the question: what constitutes typical behavior in the context of your application? Different businesses have different usage patterns, so the normal performance of a business transaction for one application at 8 am on a Friday might not be normal for another application. In short, we need to identify a baseline of the performance of a business transaction and analyze its performance against that baseline. Baselines can come in the following patterns:
The average response time for the business transaction, at the granularity of an hour, over some period, such as the last 30 days.
The average response time for the business transaction, at the granularity of an hour, based on the hour of the day. For example, we might compare the current response time at 8:15 am with the average response time for every day from 8 am to 9 am for the past 30 days.
The average response time for the business transaction, at the granularity of an hour, based on the hour of the day and the day of the week. In this pattern, we compare the response time of a business transaction on Monday at 8:15 am with the average response time of the business transaction from 8 am to 9 am on Mondays for the past two months. This pattern works well for applications with hourly variability, such as e-commerce sites that see increased load on the weekends and at certain hours of the day.
The average response time for the business transaction, at the granularity of an hour, based on the hour of the day and the day of the month. In this pattern, we compare the response time of a business transaction on the 15th of the month at 8:15 am with the average response time of the business transaction from 8 am to 9 am on the 15th of the month for the past six months. This pattern works well for applications with date-based variability, such as banking applications in which users deposit their checks on the 15th and 30th of each month.

In addition to analyzing the performance of business transactions, we also need to analyze the performance of the backends and infrastructure in which the application runs. There are aberrant conditions that can negatively impact all business transactions running in an individual environment. It is important to correlate business transaction behavior with environment behavior to identify false positives: the application may be fine, but the environment in which it is running is under duress. Finally, ecosystem metrics can be key indicators that trigger automated responses that dynamically change the environment, which we explore in the next section.
Automatically responding to performance issues

Traditional applications that ran on colossal monolithic machines, such as mainframes or even high-end servers, suffered from the problem that they were very static: adding new servers could be measured in days, weeks, and sometimes months. With the advent of the cloud, these static problems went away as application environments became elastic. Elasticity means that application environments can be dynamically changed at runtime: more virtual servers can be added during peak times and removed during slow times. It is important to note that elastic applications require different architectural patterns than their traditional counterparts, so it is not as simple as deploying your traditional application to a cloud-based environment.

One of the main advantages that elastic applications enable is the ability to automatically scale. When we detect a performance problem, we can respond in two ways:
Raise an alert so that a human can intervene and resolve the problem
Change the deployment environment to mitigate the problem

There are certain problems that cannot be resolved by adding more servers. For those cases, we need to raise an alert so that someone can analyze performance metrics and snapshots to identify the root cause of the problem. However, there are other problems that can be mitigated without human intervention. For example, if the CPU usage on the majority of the servers in a tier is over 80%, then we might be able to resolve the issue by adding more servers to that tier. Business transaction baselines should include a count of the number of business transactions executed for that hour. If we detect that load is significantly higher than normal, then we can define rules for how to change the environment to support the load. Furthermore, regardless of business transaction load, if we detect container-based performance issues across a tier, adding servers to that tier might be able to mitigate the issue. Smart rules that alter the topology of an application at runtime can save you money with your cloud-based hosting provider and can automatically reduce performance issues before they affect your users.
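To make the 80% CPU rule above concrete, here is a minimal sketch in Python. The should_scale_out function, the thresholds, and the idea that your orchestration layer exposes per-server CPU readings are all illustrative assumptions.

```python
CPU_THRESHOLD = 80.0  # percent, as in the example above
MAJORITY = 0.5        # more than half of the servers in the tier


def should_scale_out(cpu_percentages: list[float]) -> bool:
    """Return True when the majority of servers in a tier exceed the CPU threshold."""
    if not cpu_percentages:
        return False
    hot = sum(1 for cpu in cpu_percentages if cpu > CPU_THRESHOLD)
    return hot / len(cpu_percentages) > MAJORITY


# Example: three of four servers are hot, so the rule would add capacity to the tier.
print(should_scale_out([85.0, 92.5, 81.0, 40.0]))  # True
```

A real rule would also weigh business transaction load against its baseline before changing the topology, as described above.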

Conclusion

This article reviewed some of the challenges in implementing an APM strategy. A proper APM strategy requires that you capture the response time of business transactions and their constituent tiers, using techniques like code profiling and backend detection, and that you capture container metrics across your entire application ecosystem. Next, you need to correlate business transaction segments in a management server, identify the baseline that best meets your business needs, and compare current response times to your baselines. Finally, you need to determine whether you can automatically change your environment to mitigate the problem or raise an alert so that someone can analyze and resolve the problem.
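As a recap of the baselining idea from this chapter, here is a minimal sketch in Python of the hour-of-day / day-of-week pattern combined with the two-standard-deviation check used in chapter 4. The function names and the bucketing scheme are illustrative assumptions.

```python
import statistics
from datetime import datetime


def history_key(timestamp: datetime) -> tuple[int, int]:
    """Bucket past measurements by (day of week, hour) so that Mondays at
    8 am are only compared with previous Mondays between 8 am and 9 am."""
    return (timestamp.weekday(), timestamp.hour)


def is_abnormal(current_response_ms: float, history_ms: list[float]) -> bool:
    """Flag a business transaction that is slower than the baseline average
    plus two standard deviations for its (day of week, hour) bucket."""
    if len(history_ms) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(history_ms)
    stdev = statistics.stdev(history_ms)
    return current_response_ms > mean + 2 * stdev
```

After the current hour closes, its measurements would be folded into the matching bucket so that the baseline keeps moving forward over time, as described in chapter 4.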

Chapter 3: Modern enterprise databases explained

The last couple of articles presented an introduction to Application Performance Management (APM) and identified the challenges in effectively implementing an APM strategy. Now, we are going to provide a general introduction to the various databases within your environment. Before we begin, let's set the stage for the particular types of databases we are going to cover. In this article, we'll provide an overview of what SQL and NoSQL are, and the various types of databases available to customers today. This section is not meant for more advanced users. If you're already familiar with these basic concepts, feel free to skip this section and go straight to the top-6 metrics discussed in the next article. Otherwise, read on for a good general introduction to the various database types.

Structured query language

SQL stands for Structured Query Language and is used for database access and administration. International corporations such as Oracle, Microsoft, and IBM offer a broad range of cloud-based and local applications for data warehousing and enterprise resource planning. Data warehousing happens whenever data is stored in a database, in any context, for any purpose.

When you log into your email, your computer sends your username and password to the server. The server then takes that information and queries the database for your username, meaning it searches the database for the entry (or row) with your username. A row has multiple data cells (or data entries). It is typical that the row for your username would have your password, last login time, and other pertinent information corresponding specifically to your account. Other rows in the database are for other accounts. When the database receives a query from the server, it returns the information in those cells to the server. The server checks the password you entered against the text in the password data cell of the row with the username you entered (a simple sketch of this lookup appears below). Now let's discuss the differences and advantages of different types of SQL databases and why a company would select one over another to meet its business needs.

MySQL databases

Your application may be utilizing a backend database, a caching layer, or other supporting components. When choosing a system to meet your business's needs, you must consider benchmark performance. A benchmark is a standard that every implementation of a system follows. Because MySQL (owned by Oracle) was created with an eye towards speed, every implementation of a MySQL database is fast and lean. So, MySQL has a benchmark performance standard for speed and efficiency. If your system needs to perform over a thousand database transactions per hour, as a high-volume online retailer does, you would likely select MySQL for its reliable speed. If you want each of your customers to click purchase and have the transaction complete and the next page load in a few seconds, you will need a database that is created for speed and optimized for quickly handling dozens of transactions per second, simultaneously.

Also, MySQL can run faster on smaller computer systems. A computer with a 500 GB hard drive and 4 GB of RAM might be able to run large server applications with no issues, but if you're working with older systems with fewer capabilities, or if you're working with embedded systems (mobile devices), you need to choose a database system that is optimized for fast performance on systems with limited resources.
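Here is a minimal sketch of the login lookup described earlier in this chapter, using Python's built-in sqlite3 module. The users table, its columns, and the plain-text "hash" values are purely illustrative; a real system would store salted password hashes and run on a production-grade database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (username TEXT PRIMARY KEY, password_hash TEXT, last_login_time TEXT)"
)
conn.execute(
    "INSERT INTO users VALUES (?, ?, ?)",
    ("alice", "hash-of-alices-password", "2016-01-15 08:15:00"),
)

# The server queries the database for the row matching the submitted username...
row = conn.execute(
    "SELECT password_hash, last_login_time FROM users WHERE username = ?",
    ("alice",),
).fetchone()

# ...and compares the stored credential with what the user submitted.
submitted_hash = "hash-of-alices-password"
print("login ok" if row and row[0] == submitted_hash else "login rejected")
```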
Fun fact: Many well-versed MySQL developers do not know where the "My" originates. One of the developers of MySQL was a Swedish man named Michael "Monty" Widenius. He named MySQL after his daughter My. He named two other database products (MariaDB and MaxDB) after his other children.

PostgreSQL databases and ACID compliance standards

PostgreSQL is not owned by any corporation. It is open source and non-profit, much like OpenGL or Linux Ubuntu. A benchmark standard for PostgreSQL (or Postgres) databases is compliance with Atomicity, Consistency, Isolation, and Durability, also referred to as the ACID standards.

Atomicity is a standard for being able to complete multiple database transactions with each one's success contingent upon the others'. Imagine a customer purchasing a book on Amazon. When he clicks submit, the order is processed, and two events happen: his credit card is charged (one database transaction), and a book is reserved to be shipped to him (a second database transaction). Each of those events is contingent upon the other. If his credit card is declined or fails, no book will be shipped to him. If the software system finds that the book is out of stock or for some other reason cannot be shipped to him, his credit card will not be charged. Either both events occur, or neither event occurs. Atomicity is the standard that measures how capable your software system is of handling separate but co-dependent database transactions.

Consistency is a standard that measures compliance with rules about how data can be changed. Let's go back to your account's database entry. The last_login_time data cell has a timestamp that includes the date, time, and timezone of a transaction. If you run a database transaction that wants to put the word elephant in the last_login_time data cell, the database should throw an exception (meaning, return an error) and explicitly disallow that change.

Isolation is the standard that forces each database transaction to be made independently of the others. Let's go back to the customer purchasing a book on Amazon. If the book is a bestseller and has hundreds of copies sold daily, it is very likely that several customers could purchase a copy in the same minute, even the same second. Consider a situation where Amazon has five physical copies of the book left, and ten individual customers attempt to purchase a copy at the same time. The database transactions will be handled in the order they were received. If the first five database transactions reserved a copy of the book, and the next five database transactions also tried to reserve copies of the book at the same time, it's easy to see how massive logistical errors can arise: the second group of five customers will have their credit cards charged and will be told a book is shipping to them, when in reality none exists. They will have to call customer service to rectify this issue, which is a massive operational overhead cost to the business. Isolation means database transactions happen independently of one another. One database transaction must be closed and have completed all its changes before another one can begin. Sometimes, when a customer clicks purchase on Amazon, it can take seconds before the next page loads because of isolation compliance standards. That is not because of a slow internet connection; it is due to ACID compliance standards that prevent logistical problems.

Durability is a standard that measures whether changes will persist even if problems occur on the system.
Suppose Amazon's database is located on a physical hard drive in Topeka, Kansas, and consider the best-selling book example again. As soon as the first five customers click submit, there is a power outage in Topeka, and Amazon's server and

the database system crash. Let's say Amazon has an efficient recovery system, and the entire software system is up and running less than two minutes after the crash. Will the changes from the first five database transactions still be there if the system crashes unexpectedly? If the system is ACID compliant, they will be. Postgres got its name from being the successor to the earlier Ingres database (post-Ingres).

MS SQL Server databases

MS SQL Server is a Microsoft-developed database system that is widely renowned for its reliability, high levels of data integrity and durability, and other ACID compliance features. A related Microsoft database engine is Microsoft Jet, a file-sharing system. No, it is not a p2p file-sharing system used for stealing music. Much like Google Docs, multiple users can access the same file and read/write (according to each user's permission settings). What if two users open the same file at the same time? One person makes changes, and the other person makes an entirely different set of changes unrelated to the first. The first person saves his changes, and the next person saves his changes five minutes later. You might think the first person's changes will be overwritten, but not necessarily. Microsoft Jet has two fixes for this problem. The first is edit locking: only one person is allowed to have the file open for editing at a time. Before another person can open the file for editing, the first person must save their changes and close it out. Another solution that some file-sharing systems use is called change merging. It uses algorithms to take all of the changes from both users and merge them, creating a new version of the file that contains all changes from all users. This is very common in version control systems for developers who write code.

Further distinctions between MS SQL Server and other competitive systems include the methods of implementation. Just as C# differs from its open-source competitor Java, MS SQL Server is very similar to MySQL, except that Microsoft has added its distinctive touches to the coding practices and standards. These changes are for better or for worse, depending on whom you ask.

DB2

DB2 is a direct competitor to MS SQL Server, offered by IBM. IBM is more focused on compatibility with other IBM services. Flexibility and compatibility with the other components of your software solution is one of the most important criteria. Thinking intuitively, using IBM cloud services mixed in with Microsoft database software tools might not make for smooth operation of your total software solution. Your three main criteria should be:
Reliability and functionality
Speed and efficiency
Compatibility with your other software solutions

MySQL, the most popular choice, boasts nimble speed and unmatched efficiency. Postgres offers structure and time-tested reliability. Microsoft's SQL Server and IBM's DB2 are designed to work in conjunction with other software solutions from their respective companies.

NoSQL databases

Enterprises with enormous computing operations around the world are facing increasingly heavy demands for database processing power and speed. Big Data is impacting organizations at every turn, and traditional SQL relational databases are beginning to crack. The solution for many organizations lies in NoSQL.
While relational databases use tables for data storage and retrieval, NoSQL accesses structured and unstructured data with high fault tolerance, which means it is designed to continue operating even if failures occur in the system. NoSQL was created to overcome the problems relational databases have in scaling up to handle large volumes of read and write operations. SQL scales vertically, with all of the data on one server. NoSQL works horizontally, pulling data from large stores across multiple servers. This is why it can access significantly more data than SQL. To do this, there are four NoSQL database types.

Types of NoSQL databases

Column store

Unlike the row-based system of relational databases, column databases keep certain values together in the same column, providing high-speed access for retrieval requests of the same data attributes (a toy sketch of the idea follows this section). This is a good system when you have aggregate queries on a field. This database type is ideal when you have lots of data, heavy disk access, and high-availability requirements. These criteria are why Facebook implemented Cassandra and Google came out with Bigtable. Another benefit is that column databases are happy running on multiple server clusters. Databases that can run on a single machine probably don't need the power of a column store database. Better alternatives in this situation are document store and key-value database implementations. Situations that are well suited for column store databases include:
When you need to write to the database every time
When data is stored over many data centers in different locations
Apps that employ dynamic fields
When your program may eventually need to process immense amounts of data

Use case scenarios that could potentially need this amount of processing power include:
Online search operations
Stock market and trading analysis
Security analytics
Social media

Apache Accumulo and the open-source Druid are examples of this type of NoSQL database.
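As a toy illustration of the row-oriented versus column-oriented distinction (not any particular product's storage format), consider the following Python sketch.

```python
# Row-oriented layout: each record is kept together, as in a typical relational table.
rows = [
    {"user": "alice", "country": "US", "clicks": 12},
    {"user": "bob", "country": "DE", "clicks": 7},
]

# Column-oriented layout: all values of one attribute are kept together, which makes
# aggregate queries over a single field cheap, since only that column is scanned.
columns = {
    "user": ["alice", "bob"],
    "country": ["US", "DE"],
    "clicks": [12, 7],
}

total_clicks = sum(columns["clicks"])  # touches only the "clicks" column
print(total_clicks)
```

Real column stores such as Cassandra or Bigtable add distribution across server clusters, compression, and indexing on top of this basic idea.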

Document store

Data is stored as documents. These are not documents like those from a word processor; rather, they are key-value pairs of strings: sequences of alphanumeric characters. When flexibility is paramount, document stores shine. This is especially true when you need to store attributes along with the data. With this database type, you can also use embedded documents. This feature is beneficial when denormalizing, which is storing data in the same document if that data is frequently accessed. Document stores are perhaps the most widely implemented NoSQL database type because of their high performance, ease of deployment, and flexibility. You'll see them used in many different scenarios, including:
Keeping track of various metadata types
Situations that can benefit from denormalization
Support for web entities with large volumes of disk access actions
Apps that use JSON
Data with variable attributes, such as parts, inventory, or products

Examples of document stores are Couchbase, Lotus Notes, and Apache CouchDB. Cloud-based vendors include Microsoft Azure and Cloudant.

Graph databases

These databases are superior at working with interconnected data. Each database utilizes connections between nodes. Nodes are the items you are keeping track of: employees, accounts, sales, and more. Edges are the interconnections between nodes and represent the relationships between them. Graph databases fly across these connections rapidly, but scalability is limited, as all of the data must be located on one machine. Examples where graph databases are used might be freeway systems interconnecting states and towns, workers meeting with other staff in a company, or modeling of molecular structures. Each of these examples involves a relationship, or connection, between two instances of entities. Graph databases are ideal for problem domains such as:
Business processes
Social media
Information technology infrastructure
Product or service recommendation engines
Permissions and access control

Each of these examples shows that graph databases are well suited for modeling explicit relations. Sometimes graph databases are built as an add-on to other database systems, such as column stores. Usually, this construct arises when there is a need for large-scale processing.

Key/value databases

Every key in this type of database is paired to a value. A traditional relational database uses key values to pull data. For example, a human resources manager might search employees based on position, and the position field serves as the key. In key-value NoSQL, there are no fields: if modifications are made, the entire value must be changed, exclusive of the key. This method scales well, but complex operations may be difficult. Key-value setups have simple queries. There are some databases in this category that are more sophisticated, but, in general, key-value databases do not have the query strengths of graph, document store, and column store databases. These databases are used for a myriad of use cases, including:
Memory caches for relational databases
User data stores for mobile apps
Online shopping carts
Image, audio, and other large file storage

Examples include Aerospike, MemcacheDB, and MUMPS. Cloud-based providers include Microsoft Azure and Amazon Web Services' SimpleDB.

Conclusion

As the fast-moving world of database technology continues to evolve, NoSQL will become increasingly widespread.
Databases are growing exponentially, and enterprises need creative solutions to processing challenges that SQL finds daunting. Just look at Facebook as an example. With over 1 billion members, its data demands are far beyond anything that SQL can handle. In comparison, NoSQL can handle the volume and variety of data that these companies manage every day. Craigslist, for example, switched from MySQL to MongoDB and experienced much faster innovation and product development. On the other hand, Uber chose MySQL and then had to modify it considerably to emulate a NoSQL database.

SQL or NoSQL?

However, the rapid growth of NoSQL does not mean that it is right for your company or its database challenges. The banking industry, for instance, continues to use relational databases that process millions of customer transactions every day. SQL is much better suited for this scenario. Likewise, NoSQL is not very good at non-real-time data warehousing. An old-school SQL application is better in this case. Carefully weighing your future growth, database demands, and need for flexibility and speed will indicate whether you should implement NoSQL. There are many advantages, but the drawbacks must be considered and steps taken to minimize their impact. In the next article, we'll look at the top-6 performance metrics to measure in enterprise databases and how to interpret them.

Chapter 4: Top six database performance metrics to monitor in enterprise applications

The previous article presented an introduction to SQL and NoSQL. This article builds on those topics by reviewing six of the top performance metrics to capture to assess the health of your database in your enterprise application. Specifically, this article reviews the following:
Business Transactions
Query Performance
User and Query Conflicts
Capacity
Configuration
NoSQL Databases

Business transactions

Business transactions provide insight into real user behavior: they capture the real-time performance that real users are experiencing as they interact with your application. As mentioned in the previous article, measuring the performance of a business transaction involves capturing the response time of a business transaction holistically as well as measuring the response times of its constituent tiers. These response times can then be compared with the baseline that best meets your business needs to determine normalcy.

If you were to measure only a single aspect of your application, I would encourage you to measure the behavior of your business transactions. While container metrics can provide a wealth of information and can help you determine when to auto-scale your environment, your business transactions determine the performance of your application. Instead of asking for the CPU usage of your application server, you should be asking whether or not your users can complete their business transactions and whether those business transactions are behaving optimally.

As a little background, business transactions are identified by their entry point, which is the interaction with your application that starts the business transaction. Once a business transaction is defined, its performance is measured across your entire application ecosystem. The performance of each business transaction is evaluated against its baseline to assess normalcy. For example, we might determine that if the response time of the business transaction is slower than two standard deviations from the average response time for this baseline, then it is behaving abnormally, as shown in Figure 1.

Figure 1: Evaluating business transaction response time against its baseline (alert when the response time is greater than two standard deviations from the average)

The baseline used to assess the business transaction is consistent for the hour in which the business transaction is running, but the baseline itself is being refined by each business transaction execution. For example, if you have chosen a baseline that compares business transactions against the average response time for the hour of the day and the day of the week, then after the current hour is over, all business transactions executed in that hour will be incorporated into the baseline for next week. Through this mechanism, an application can evolve over time without requiring the original baseline to be thrown away and rebuilt; you can consider it a window moving over time.

In summary, business transactions are the most reflective measurement of the user experience, so they are the most important metric to capture.

Query performance

The most obvious place to look for poor query performance is in the query itself. Problems can result from queries that take too long to identify the required data or bring the data back.
Look for these issues in queries:

Selecting more data than needed
It is not enough to write queries that return the appropriate rows; queries that return too many columns can cause slowness both in selecting the rows and in retrieving the data. It is better to list the required columns rather than writing SELECT *. When the query is based on selecting specific fields, the plan may identify a covering index, which can speed up the results. A covering index includes all the fields used in the query. This means that the database can generate the results just from the index; it does not need to go to the underlying table to build the result. Additionally, listing the columns required in the result reduces the data that's transmitted, which also benefits performance.
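Here is a minimal sketch of the covering-index advice above, using Python's built-in sqlite3 module; the orders table, its columns, and the index name are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, "
    "status TEXT, total REAL, notes TEXT)"
)

# A covering index: it contains every column the query below needs, so the
# database can answer from the index alone without touching the base table.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, status, total)")

# Prefer listing only the required columns over SELECT *.
rows = conn.execute(
    "SELECT status, total FROM orders WHERE customer_id = ?", (42,)
).fetchall()
print(rows)
```

Because the query names only customer_id, status, and total, the index above covers it; writing SELECT * instead would force the database back to the table and transmit the unused notes column as well.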


More information

Real-Time Insights from the Source

Real-Time Insights from the Source LATENCY LATENCY LATENCY Real-Time Insights from the Source This white paper provides an overview of edge computing, and how edge analytics will impact and improve the trucking industry. What Is Edge Computing?

More information

WEBSITE & CLOUD PERFORMANCE ANALYSIS. Evaluating Cloud Performance for Web Site Hosting Requirements

WEBSITE & CLOUD PERFORMANCE ANALYSIS. Evaluating Cloud Performance for Web Site Hosting Requirements WEBSITE & CLOUD PERFORMANCE ANALYSIS Evaluating Cloud Performance for Web Site Hosting Requirements WHY LOOK AT PERFORMANCE? There are many options for Web site hosting services, with most vendors seemingly

More information

Massive Scalability With InterSystems IRIS Data Platform

Massive Scalability With InterSystems IRIS Data Platform Massive Scalability With InterSystems IRIS Data Platform Introduction Faced with the enormous and ever-growing amounts of data being generated in the world today, software architects need to pay special

More information

A comparison of UKCloud s platform against other public cloud providers

A comparison of UKCloud s platform against other public cloud providers Pure commitment. A comparison of UKCloud s platform against other public cloud providers version 1.0 (Based upon August 2017 data) The evolution of UKCloud UKCloud has long been known for its VMware powered

More information

A Single Source of Truth

A Single Source of Truth A Single Source of Truth is it the mythical creature of data management? In the world of data management, a single source of truth is a fully trusted data source the ultimate authority for the particular

More information

2018 Database DevOps Survey DBmaestro 1

2018 Database DevOps Survey DBmaestro 1 2018 Database DevOps Survey 2017 DBmaestro 1 Table of Contents Executive Summary... 3 What Percentage of IT Projects in Your Company Use a DevOps Approach?... 4 Integration of DBAs with DevOps Teams...

More information

NewSQL Without Compromise

NewSQL Without Compromise NewSQL Without Compromise Everyday businesses face serious challenges coping with application performance, maintaining business continuity, and gaining operational intelligence in real- time. There are

More information

Introduction to Big Data. NoSQL Databases. Instituto Politécnico de Tomar. Ricardo Campos

Introduction to Big Data. NoSQL Databases. Instituto Politécnico de Tomar. Ricardo Campos Instituto Politécnico de Tomar Introduction to Big Data NoSQL Databases Ricardo Campos Mestrado EI-IC Análise e Processamento de Grandes Volumes de Dados Tomar, Portugal, 2016 Part of the slides used in

More information

Optimize Your Databases Using Foglight for Oracle s Performance Investigator

Optimize Your Databases Using Foglight for Oracle s Performance Investigator Optimize Your Databases Using Foglight for Oracle s Performance Investigator Solve performance issues faster with deep SQL workload visibility and lock analytics Abstract Get all the information you need

More information

ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS. By George Crump

ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS. By George Crump ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS By George Crump Economical, Storage Purpose-Built for the Emerging Data Centers Most small, growing businesses start as a collection of laptops

More information

Maximizing Fraud Prevention Through Disruptive Architectures Delivering speed at scale.

Maximizing Fraud Prevention Through Disruptive Architectures Delivering speed at scale. Maximizing Fraud Prevention Through Disruptive Architectures Delivering speed at scale. January 2016 Credit Card Fraud prevention is among the most time-sensitive and high-value of IT tasks. The databases

More information

The importance of monitoring containers

The importance of monitoring containers The importance of monitoring containers The container achilles heel As the containerization market skyrockets, with DevOps and continuous delivery as its jet fuel, organizations are trading one set of

More information

Document Sub Title. Yotpo. Technical Overview 07/18/ Yotpo

Document Sub Title. Yotpo. Technical Overview 07/18/ Yotpo Document Sub Title Yotpo Technical Overview 07/18/2016 2015 Yotpo Contents Introduction... 3 Yotpo Architecture... 4 Yotpo Back Office (or B2B)... 4 Yotpo On-Site Presence... 4 Technologies... 5 Real-Time

More information

Identifying Workloads for the Cloud

Identifying Workloads for the Cloud Identifying Workloads for the Cloud 1 This brief is based on a webinar in RightScale s I m in the Cloud Now What? series. Browse our entire library for webinars on cloud computing management. Meet our

More information

A Better Approach to Leveraging an OpenStack Private Cloud. David Linthicum

A Better Approach to Leveraging an OpenStack Private Cloud. David Linthicum A Better Approach to Leveraging an OpenStack Private Cloud David Linthicum A Better Approach to Leveraging an OpenStack Private Cloud 1 Executive Summary The latest bi-annual survey data of OpenStack users

More information

TECHNOLOGY WHITE PAPER. Java for the Real Time Business

TECHNOLOGY WHITE PAPER. Java for the Real Time Business TECHNOLOGY WHITE PAPER Executive Summary The emerging Real Time Business Imperative means your business now must leverage new technologies and high volumes of data to deliver insight, capability and value

More information

Embedded Technosolutions

Embedded Technosolutions Hadoop Big Data An Important technology in IT Sector Hadoop - Big Data Oerie 90% of the worlds data was generated in the last few years. Due to the advent of new technologies, devices, and communication

More information

The Modern Manufacturer s Guide to. Industrial Wireless Cisco and/or its affiliates. All rights reserved.

The Modern Manufacturer s Guide to. Industrial Wireless Cisco and/or its affiliates. All rights reserved. The Modern Manufacturer s Guide to Industrial Wireless 2017 Cisco and/or its affiliates. All rights reserved. The Modern Manufacturer s Guide to Industrial Wireless Page 2 It s hard to imagine an effective

More information

Crash Course in Modernization. A whitepaper from mrc

Crash Course in Modernization. A whitepaper from mrc Crash Course in Modernization A whitepaper from mrc Introduction Modernization is a confusing subject for one main reason: It isn t the same across the board. Different vendors sell different forms of

More information

Middle East Technical University. Jeren AKHOUNDI ( ) Ipek Deniz Demirtel ( ) Derya Nur Ulus ( ) CENG553 Database Management Systems

Middle East Technical University. Jeren AKHOUNDI ( ) Ipek Deniz Demirtel ( ) Derya Nur Ulus ( ) CENG553 Database Management Systems Middle East Technical University Jeren AKHOUNDI (1836345) Ipek Deniz Demirtel (1997691) Derya Nur Ulus (1899608) CENG553 Database Management Systems * Introduction to Cloud Computing * Cloud DataBase as

More information

BUYING SERVER HARDWARE FOR A SCALABLE VIRTUAL INFRASTRUCTURE

BUYING SERVER HARDWARE FOR A SCALABLE VIRTUAL INFRASTRUCTURE E-Guide BUYING SERVER HARDWARE FOR A SCALABLE VIRTUAL INFRASTRUCTURE SearchServer Virtualization P art 1 of this series explores how trends in buying server hardware have been influenced by the scale-up

More information

2012 Microsoft Corporation. All rights reserved. Microsoft, Active Directory, Excel, Lync, Outlook, SharePoint, Silverlight, SQL Server, Windows,

2012 Microsoft Corporation. All rights reserved. Microsoft, Active Directory, Excel, Lync, Outlook, SharePoint, Silverlight, SQL Server, Windows, 2012 Microsoft Corporation. All rights reserved. Microsoft, Active Directory, Excel, Lync, Outlook, SharePoint, Silverlight, SQL Server, Windows, Windows Server, and other product names are or may be registered

More information

Datacenter Care HEWLETT PACKARD ENTERPRISE. Key drivers of an exceptional NPS score

Datacenter Care HEWLETT PACKARD ENTERPRISE. Key drivers of an exceptional NPS score Datacenter Care The things I love about Datacenter Care is the a la carte nature of the offering. The contract is really flexible and the services delivered correspond exactly to what we bought. The contract

More information

NINE MYTHS ABOUT. DDo S PROTECTION

NINE MYTHS ABOUT. DDo S PROTECTION NINE S ABOUT DDo S PROTECTION NINE S ABOUT DDOS PROTECTION The trajectory of DDoS attacks is clear: yearly increases in total DDoS attacks, an ever-growing number of attack vectors, and billions of potentially

More information

The data quality trends report

The data quality trends report Report The 2015 email data quality trends report How organizations today are managing and using email Table of contents: Summary...1 Research methodology...1 Key findings...2 Email collection and database

More information

What you need to know about cloud backup: your guide to cost, security, and flexibility. 8 common questions answered

What you need to know about cloud backup: your guide to cost, security, and flexibility. 8 common questions answered What you need to know about cloud backup: your guide to cost, security, and flexibility. 8 common questions answered Over the last decade, cloud backup, recovery and restore (BURR) options have emerged

More information

CIO Guide: Disaster recovery solutions that work. Making it happen with Azure in the public cloud

CIO Guide: Disaster recovery solutions that work. Making it happen with Azure in the public cloud CIO Guide: Disaster recovery solutions that work Making it happen with Azure in the public cloud Consult Build Transform Support When you re considering a shift to Disaster Recovery as a service (DRaaS),

More information

IBM s Integrated Data Management Solutions for the DBA

IBM s Integrated Data Management Solutions for the DBA Information Management IBM s Integrated Data Management Solutions for the DBA Stop Stressing and Start Automating! Agenda Daily Woes: Trials and tribulations of the DBA Business Challenges: Beyond the

More information

The Power of the Crowd

The Power of the Crowd WHITE PAPER The Power of the Crowd SUMMARY With the shift to Software-as-a-Service and Cloud nearly complete, organizations can optimize their end user experience and network operations with the power

More information

AppEnsure s End User Centric APM Product Overview

AppEnsure s End User Centric APM Product Overview AppEnsure s End User Centric APM Product Overview January 2017 2017 AppEnsure Inc. Overview AppEnsure provides application performance management (APM) for IT Operations to proactively provide the best

More information

Up and Running Software The Development Process

Up and Running Software The Development Process Up and Running Software The Development Process Success Determination, Adaptative Processes, and a Baseline Approach About This Document: Thank you for requesting more information about Up and Running

More information

Distributed Architectures & Microservices. CS 475, Spring 2018 Concurrent & Distributed Systems

Distributed Architectures & Microservices. CS 475, Spring 2018 Concurrent & Distributed Systems Distributed Architectures & Microservices CS 475, Spring 2018 Concurrent & Distributed Systems GFS Architecture GFS Summary Limitations: Master is a huge bottleneck Recovery of master is slow Lots of success

More information

ORACLE DEPLOYMENT DECISION GUIDE: COMPARING YOUR DEPLOYMENT OPTIONS

ORACLE DEPLOYMENT DECISION GUIDE: COMPARING YOUR DEPLOYMENT OPTIONS ORACLE DEPLOYMENT DECISION GUIDE: COMPARING YOUR DEPLOYMENT OPTIONS Oracle Database Appliance (ODA) Oracle Exadata Commodity x86 Servers Public Cloud IBM POWER DIY Vendita Database Cloud Server (DCS) IBM

More information

Understanding Managed Services

Understanding Managed Services Understanding Managed Services The buzzword relating to IT Support is Managed Services, and every day more and more businesses are jumping on the bandwagon. But what does managed services actually mean

More information

Perfect Balance of Public and Private Cloud

Perfect Balance of Public and Private Cloud Perfect Balance of Public and Private Cloud Delivered by Fujitsu Introducing A unique and flexible range of services, designed to make moving to the public cloud fast and easier for your business. These

More information

Enabling Performance & Stress Test throughout the Application Lifecycle

Enabling Performance & Stress Test throughout the Application Lifecycle Enabling Performance & Stress Test throughout the Application Lifecycle March 2010 Poor application performance costs companies millions of dollars and their reputation every year. The simple challenge

More information

AN APPLICATION-CENTRIC APPROACH TO DATA CENTER MIGRATION

AN APPLICATION-CENTRIC APPROACH TO DATA CENTER MIGRATION WHITE PAPER AN APPLICATION-CENTRIC APPROACH TO DATA CENTER MIGRATION Five key success factors Abstract IT organizations today are under constant business pressure to transform their infrastructure to reduce

More information

CLIENT SERVER ARCHITECTURE:

CLIENT SERVER ARCHITECTURE: CLIENT SERVER ARCHITECTURE: Client-Server architecture is an architectural deployment style that describe the separation of functionality into layers with each segment being a tier that can be located

More information

WHITEPAPER MOVING TO A NEW BUSINESS PHONE SYSTEM

WHITEPAPER MOVING TO A NEW BUSINESS PHONE SYSTEM WHITEPAPER MOVING TO A NEW BUSINESS PHONE SYSTEM Introduction Phone systems have been installed in offices of all different sizes for more than 40 years, providing a vital service to the business. Since

More information

JELASTIC PLATFORM-AS-INFRASTRUCTURE

JELASTIC PLATFORM-AS-INFRASTRUCTURE JELASTIC PLATFORM-AS-INFRASTRUCTURE Jelastic provides enterprise cloud software that redefines the economics of cloud deployment and management. We deliver Platform-as-Infrastructure: bringing together

More information

TWOO.COM CASE STUDY CUSTOMER SUCCESS STORY

TWOO.COM CASE STUDY CUSTOMER SUCCESS STORY TWOO.COM CUSTOMER SUCCESS STORY With over 30 million users, Twoo.com is Europe s leading social discovery site. Twoo runs the world s largest scale-out SQL deployment, with 4.4 billion transactions a day

More information

ACL Interpretive Visual Remediation

ACL Interpretive Visual Remediation January 2016 ACL Interpretive Visual Remediation Innovation in Internal Control Management SOLUTIONPERSPECTIVE Governance, Risk Management & Compliance Insight 2015 GRC 20/20 Research, LLC. All Rights

More information

CLOUDALLY EBOOK. Best Practices for Business Continuity

CLOUDALLY EBOOK. Best Practices for Business Continuity CLOUDALLY EBOOK 8 Disaster Recovery Best Practices for Business Continuity Introduction Disaster can strike at any moment, and it s impossible to plan for every eventuality. When Hurricane Katrina hit

More information

SQL Azure as a Self- Managing Database Service: Lessons Learned and Challenges Ahead

SQL Azure as a Self- Managing Database Service: Lessons Learned and Challenges Ahead SQL Azure as a Self- Managing Database Service: Lessons Learned and Challenges Ahead 1 Introduction Key features: -Shared nothing architecture -Log based replication -Support for full ACID properties -Consistency

More information

ITIL Event Management in the Cloud

ITIL Event Management in the Cloud ITIL Event Management in the Cloud An AWS Cloud Adoption Framework Addendum January 2017 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational

More information

SaaS Providers. ThousandEyes for. Summary

SaaS Providers. ThousandEyes for. Summary USE CASE ThousandEyes for SaaS Providers Summary With Software-as-a-Service (SaaS) applications rapidly replacing onpremise solutions, the onus of ensuring a great user experience for these applications

More information

Balancing the pressures of a healthcare SQL Server DBA

Balancing the pressures of a healthcare SQL Server DBA Balancing the pressures of a healthcare SQL Server DBA More than security, compliance and auditing? Working with SQL Server in the healthcare industry presents many unique challenges. The majority of these

More information

NOSQL OPERATIONAL CHECKLIST

NOSQL OPERATIONAL CHECKLIST WHITEPAPER NOSQL NOSQL OPERATIONAL CHECKLIST NEW APPLICATION REQUIREMENTS ARE DRIVING A DATABASE REVOLUTION There is a new breed of high volume, highly distributed, and highly complex applications that

More information

BUILD BETTER MICROSOFT SQL SERVER SOLUTIONS Sales Conversation Card

BUILD BETTER MICROSOFT SQL SERVER SOLUTIONS Sales Conversation Card OVERVIEW SALES OPPORTUNITY Lenovo Database Solutions for Microsoft SQL Server bring together the right mix of hardware infrastructure, software, and services to optimize a wide range of data warehouse

More information

Diagnosing the cause of poor application performance

Diagnosing the cause of poor application performance Diagnosing the cause of poor application performance When it comes to troubleshooting application performance issues, there are two steps you can take to make diagnosis easier, faster and more accurate.

More information

Record-Level Access: Under the Hood

Record-Level Access: Under the Hood Record-Level Access: Under the Hood Salesforce, Winter 18 @salesforcedocs Last updated: November 2, 2017 Copyright 2000 2017 salesforce.com, inc. All rights reserved. Salesforce is a registered trademark

More information

IBM POWER SYSTEMS: YOUR UNFAIR ADVANTAGE

IBM POWER SYSTEMS: YOUR UNFAIR ADVANTAGE IBM POWER SYSTEMS: YOUR UNFAIR ADVANTAGE Choosing IT infrastructure is a crucial decision, and the right choice will position your organization for success. IBM Power Systems provides an innovative platform

More information

Continuous Processing versus Oracle RAC: An Analyst s Review

Continuous Processing versus Oracle RAC: An Analyst s Review Continuous Processing versus Oracle RAC: An Analyst s Review EXECUTIVE SUMMARY By Dan Kusnetzky, Distinguished Analyst Most organizations have become so totally reliant on information technology solutions

More information

Progress DataDirect For Business Intelligence And Analytics Vendors

Progress DataDirect For Business Intelligence And Analytics Vendors Progress DataDirect For Business Intelligence And Analytics Vendors DATA SHEET FEATURES: Direction connection to a variety of SaaS and on-premises data sources via Progress DataDirect Hybrid Data Pipeline

More information

Seminar report Google App Engine Submitted in partial fulfillment of the requirement for the award of degree Of CSE

Seminar report Google App Engine Submitted in partial fulfillment of the requirement for the award of degree Of CSE A Seminar report On Google App Engine Submitted in partial fulfillment of the requirement for the award of degree Of CSE SUBMITTED TO: SUBMITTED BY: www.studymafia.org www.studymafia.org Acknowledgement

More information

The security challenge in a mobile world

The security challenge in a mobile world The security challenge in a mobile world Contents Executive summary 2 Executive summary 3 Controlling devices and data from the cloud 4 Managing mobile devices - Overview - How it works with MDM - Scenario

More information

The Data Explosion. A Guide to Oracle s Data-Management Cloud Services

The Data Explosion. A Guide to Oracle s Data-Management Cloud Services The Data Explosion A Guide to Oracle s Data-Management Cloud Services More Data, More Data Everyone knows about the data explosion. 1 And the challenges it presents to businesses large and small. No wonder,

More information

Built for Speed: Comparing Panoply and Amazon Redshift Rendering Performance Utilizing Tableau Visualizations

Built for Speed: Comparing Panoply and Amazon Redshift Rendering Performance Utilizing Tableau Visualizations Built for Speed: Comparing Panoply and Amazon Redshift Rendering Performance Utilizing Tableau Visualizations Table of contents Faster Visualizations from Data Warehouses 3 The Plan 4 The Criteria 4 Learning

More information

BENCHMARK: PRELIMINARY RESULTS! JUNE 25, 2014!

BENCHMARK: PRELIMINARY RESULTS! JUNE 25, 2014! BENCHMARK: PRELIMINARY RESULTS JUNE 25, 2014 Our latest benchmark test results are in. The detailed report will be published early next month, but after 6 weeks of designing and running these tests we

More information

Cisco APIC Enterprise Module Simplifies Network Operations

Cisco APIC Enterprise Module Simplifies Network Operations Cisco APIC Enterprise Module Simplifies Network Operations October 2015 Prepared by: Zeus Kerravala Cisco APIC Enterprise Module Simplifies Network Operations by Zeus Kerravala October 2015 º º º º º º

More information

Moving to a New Business Phone System

Moving to a New Business Phone System Moving to a New Business Phone System BroadSoft White Paper OneCloudNetworks is an authorized BroadSoft Service Provider 2015 BroadSoft. All Rights Reserved. Introduction Phone systems have been installed

More information

Java Without the Jitter

Java Without the Jitter TECHNOLOGY WHITE PAPER Achieving Ultra-Low Latency Table of Contents Executive Summary... 3 Introduction... 4 Why Java Pauses Can t Be Tuned Away.... 5 Modern Servers Have Huge Capacities Why Hasn t Latency

More information

e BOOK Do you feel trapped by your database vendor? What you can do to take back control of your database (and its associated costs!

e BOOK Do you feel trapped by your database vendor? What you can do to take back control of your database (and its associated costs! e BOOK Do you feel trapped by your database vendor? What you can do to take back control of your database (and its associated costs!) With private and hybrid cloud infrastructures now reaching critical

More information

GET CLOUD EMPOWERED. SEE HOW THE CLOUD CAN TRANSFORM YOUR BUSINESS.

GET CLOUD EMPOWERED. SEE HOW THE CLOUD CAN TRANSFORM YOUR BUSINESS. GET CLOUD EMPOWERED. SEE HOW THE CLOUD CAN TRANSFORM YOUR BUSINESS. Cloud computing is as much a paradigm shift in data center and IT management as it is a culmination of IT s capacity to drive business

More information

Rule 14 Use Databases Appropriately

Rule 14 Use Databases Appropriately Rule 14 Use Databases Appropriately Rule 14: What, When, How, and Why What: Use relational databases when you need ACID properties to maintain relationships between your data. For other data storage needs

More information

Building a Data Strategy for a Digital World

Building a Data Strategy for a Digital World Building a Data Strategy for a Digital World Jason Hunter, CTO, APAC Data Challenge: Pushing the Limits of What's Possible The Art of the Possible Multiple Government Agencies Data Hub 100 s of Service

More information

Chapter 5. Database Processing

Chapter 5. Database Processing Chapter 5 Database Processing No, Drew, You Don t Know Anything About Creating Queries." AllRoad Parts operational database used to determine which parts to consider for 3D printing. If Addison and Drew

More information

IBM dashdb Local. Using a software-defined environment in a private cloud to enable hybrid data warehousing. Evolving the data warehouse

IBM dashdb Local. Using a software-defined environment in a private cloud to enable hybrid data warehousing. Evolving the data warehouse IBM dashdb Local Using a software-defined environment in a private cloud to enable hybrid data warehousing Evolving the data warehouse Managing a large-scale, on-premises data warehouse environments to

More information

Networking for a smarter data center: Getting it right

Networking for a smarter data center: Getting it right IBM Global Technology Services October 2011 Networking for a smarter data center: Getting it right Planning the network needed for a dynamic infrastructure 2 Networking for a smarter data center: Getting

More information

5 reasons why choosing Apache Cassandra is planning for a multi-cloud future

5 reasons why choosing Apache Cassandra is planning for a multi-cloud future White Paper 5 reasons why choosing Apache Cassandra is planning for a multi-cloud future Abstract We have been hearing for several years now that multi-cloud deployment is something that is highly desirable,

More information