CODENOMICON BEST PRACTICES. Making the Most Out of Your Robustness and Security Testing using DEFENSICS


Version 1.1, June 1, 2007
Delivered: 3 November 2006

1. INTRODUCTION

This document presents a set of process practices for applying Codenomicon DEFENSICS robustness testing solutions throughout your development lifecycle. It is intended as a collection of solid process ideas that should be applicable to different audiences and organizations. Since every organization is unique, the contents of this document should be applied in a way that is appropriate for your specific needs. We do not presume to know your product development practices better than you do, but we have seen up close how our other customers have applied Codenomicon tests and have learned some lessons from that. This document is an encapsulation of that knowledge, and it is our sincere hope that you find some good and useful things in it.

The rest of this first chapter introduces software security and Codenomicon robustness testing. The second chapter discusses how you can integrate the tests into your development lifecycle; it also describes how usage may differ between user groups and introduces some other testing disciplines and practices that can augment the results of Codenomicon tests. The third chapter explains basic test tool usage, the fourth provides additional tips for advanced usage, the fifth contains pointers to additional information, and the sixth presents a short conclusion. Finally, Appendix A presents a template for a testing checklist which you can easily modify to fit your specific needs, Appendix B provides a quick template for recording your test results, and Appendix C contains a short glossary of terms.

1.1. SOFTWARE SECURITY

Nowadays, security problems plague the products used to operate the Internet and other global networks, as well as the products used to access those networks. Denial-of-service attacks against core network infrastructure components occur on a daily basis. Operating systems, web browsers, and applications have all had their share of reported security problems. A significant portion of these vulnerabilities are simple robustness problems caused by careless or misguided programming. The Internet underground community searches for these flaws using non-systematic, ad-hoc methods and publishes its results for fun and profit. The large number of reported problems in some popular software packages can be explained partly by the huge attention those packages have received, but also by the numerous flaws they clearly contain. Although reports of serious damage caused by exploitation of these vulnerabilities are sparse, they pose a threat to the networked society.

Security assessment of software by source code auditing is expensive and laborious, and methods for security analysis without access to the source code have been few and usually limited in scope. This may be one reason why many major software vendors have been stuck in the loop of fixing vulnerabilities that have been found in the wild and providing countless patches to their clients to keep those systems protected.

1.2. ROBUSTNESS TESTING

Since 2001, Codenomicon has been pioneering the creation and adoption of systematic robustness testing methods and tools for improving implementation-level security. A method devised originally in the industry-proven PROTOS project of the Oulu University Secure Programming Group (OUSPG), robustness testing is based on the systematic creation of a very large number of protocol messages (tens or hundreds of thousands) that contain exceptional elements simulating malicious attacks. This method provides a low-cost, proactive way to assess software robustness and to improve its security.

The security assessment of a software component is based on a robustness analysis of the component. Robustness is defined as the ability of software to tolerate exceptional input and stressful environmental conditions. A piece of software which is not robust fails when facing such circumstances, and a malicious intruder can easily take advantage of robustness shortcomings to compromise the system running the software. In fact, a large portion of publicly reported information security vulnerabilities are caused by robustness weaknesses. All robustness problems can be exploited to cause denial-of-service conditions by feeding the vulnerable component maliciously formatted inputs, and certain types of robustness flaws (e.g. common buffer overflows) can be exploited to run externally supplied code in the vulnerable component.

In addition to increased information security, software robustness promotes software quality in general. A robust piece of software has fewer bugs, which in turn increases user satisfaction and provides better uptime for the systems running the software. Proactive robustness analysis thus complements traditional process-based quality systems and code audits as a method for assessing software quality.

Robustness weaknesses are introduced during the programming (implementation) phase of the vulnerable software component. These kinds of errors easily slip through ordinary code auditing and testing, since robustness problems generally do not manifest themselves during normal operation. They become visible only when someone or something presents the implementation with a carefully constructed malicious piece of input, or with corrupted data. Note that malicious input is not always made by design; over the years, countless problems have resulted from one buggy implementation sending broken data to another, i.e. from problems in interoperability.

The Codenomicon method of robustness analysis is based on the systematic creation of a large set of exceptional input data that is fed into the tested component. The number of input data units causing problems provides a quantitative figure for the robustness, overall quality and implementation-level security of the tested component. As a short-term benefit, the robustness analysis process provides information for estimating the maturity of the tested software component and for fixing the immediate problems that are found. In the long run, this kind of assessment provides strong, quantitative quality metrics on your products and development processes, and serves to promote awareness of the importance of solid programming practices among your development units and subcontractors. The test tools are also likely to find the kinds of vulnerabilities that would otherwise be found by the underground community searching for flaws for fun and profit. All this results in fewer information security problems being found and reported after shipment of the assessed system. More information on robustness testing and its applications is available from Codenomicon.

1.3. LIMITATIONS

The usual limitations of black-box testing, and of testing in general, also apply to robustness analysis. Passing the analysis is in no way a certificate for a vulnerability-free system. A complete security audit of a system requires many actions besides robustness analysis and must cover the design of the system in general, its usability, and other aspects. Robustness analysis can be used as an integral part of a complete audit, or as a standalone method to provide insight into the security and quality of the tested software component.

In creating robustness test cases for any protocol, the number of different possible inputs is always infinite. For this reason, a subset of inputs that covers the largest possible number of different protocol messages and elements with the best possible test efficiency must be carefully selected and created. This means that some portions of the system under test will necessarily remain unexercised. Any available tools for code coverage metrics can be used to provide information about the proportion of software statements covered by the tests.

We also recommend augmenting the robustness analysis process by improving the overall quality of your product development lifecycle. Some mechanisms for this include increasing awareness of security problems throughout your entire organization, encouraging secure programming practices, promoting the need for quality at every phase of the product development lifecycle, and using all available tools for improving code quality. The latter category could include robustness testing, automated source code analysis, manual code auditing, and finally establishing stable processes for handling potential incidents later on during the life of your product.

1.4. EXPECTED RESULTS

The software vulnerabilities found through the use of Codenomicon tests are likely to be robustness problems caused by implementation-time mistakes (i.e. mistakes made during programming). Many of these mistakes are vulnerabilities from a security point of view. During testing, these mistakes can manifest in various ways:

1. Crashing of the component, followed by a possible restart.
2. Hanging of the component in a busy loop where most CPU time is wasted, leading to a denial-of-service situation.
3. Temporary slowdown in processing, leading to a denial-of-service situation if the condition can be sustained.
4. Failure of the component to provide useful services (e.g. network connections are refused), leading to a denial-of-service situation.

At the level of programming languages, the possible types of mistakes leading to robustness problems are numerous: missing length checks, pointer arithmetic errors, index handling failures, memory allocation problems, threading problems, etc. Not all problems have a direct security impact, but in any case their removal promotes the reliability of the assessed software component.
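The systematic creation of exceptional inputs described above can be illustrated with a minimal sketch. The anomaly catalog and the message fields below are invented for illustration only and are not the actual DEFENSICS test case model, which derives far larger case sets from full protocol specifications.

```python
# Illustrative sketch of systematic anomaly generation for one protocol
# field. The anomaly catalog below is a tiny, hypothetical sample; a real
# robustness tester derives tens of thousands of cases from a full
# protocol model.

FIELD_ANOMALIES = [
    b"",                      # empty element (missing-field handling)
    b"A" * 65536,             # overlong element (length/buffer checks)
    b"%s%s%s%n",              # format-string metacharacters
    b"\x00" * 16,             # embedded NUL bytes
    b"\xff\xff\xff\xff",      # maximum binary value (integer handling)
]

def generate_cases(valid_msg: dict, field: str):
    """Yield one test message per anomaly, replacing a single field of an
    otherwise valid message. The other fields stay valid so that the
    parser under test actually reaches the anomalous element."""
    for anomaly in FIELD_ANOMALIES:
        case = dict(valid_msg)
        case[field] = anomaly
        yield case

valid = {"method": b"GET", "uri": b"/index.html", "version": b"HTTP/1.1"}
cases = list(generate_cases(valid, "uri"))
print(len(cases))  # one case per anomaly
```

In a real test design, each field of each message of each message sequence is anomalized in turn, which is how the case counts quickly reach the tens or hundreds of thousands mentioned above.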

2. CODENOMICON TESTING IN THE SDLC

Codenomicon robustness tests can be performed at different stages of the software development lifecycle (SDLC), depending on your specific needs and on which group inside your organization is in charge of the testing.

As a rule of thumb, several well-known studies have shown that it is most cost-efficient to discover and remediate bugs as early in the SDLC as possible. This implies that you should position Codenomicon robustness tests at as early a stage in your development cycle as possible, preferably at the point when the tested implementation is finished just enough that tests can be meaningfully executed against it. In the real world, however, this positioning is not always so clear-cut. A security test group in your organization may be in charge of gating products or code releases just before they ship to the customer. Robustness testing can also be applied at that stage, with the downside that the gap between the development of the implementation and the discovery of a bug is longer, so returning the product to development and remediating the bug will be more costly.

Practice has shown that there are several useful positions for robustness testing. Depending on your needs and the different functions inside your organization, you can apply the testing at a single phase only or, as recommended, benefit from it throughout the entire development cycle. Some potential phases where robustness testing can be performed are:

1. Early on during development, by developers.
2. During QA testing, by testers.
3. During final acceptance testing, by a product security team.
4. Proactive testing of existing products, by a product security team.
5. Regression testing during the development of further versions of the product.
6. Acceptance testing or product comparison performed by a customer.

Figure 1. Robustness testing in the SDLC

2.1. PREPARATION, EXECUTION, OBSERVATION, REMEDIATION, REPETITION

The typical usage pattern for Codenomicon DEFENSICS tests falls into a five-stage process: planning and preparing for the tests, executing the tests, observing the results, remediating any discovered problems, and repeating the tests to verify that the problems have been fixed.

In any testing discipline, the first step is always to prepare well. You should provision, set up and prepare the system(s) you are planning to test. You should identify which interfaces you will be testing. You should understand and be able to change the configuration of the SUT if necessary, or at least have easy access to someone who knows the system well and can help with this. If you are testing multiple interfaces, prioritize the tests so that the most critical interfaces are tested first.

During the preparation phase, you also need to make sure you have access to tools and devices that make the testing easier. Obviously a test workstation is needed for running Codenomicon test tools and gathering the results; see section 3.1 for more details on system requirements. Additionally, having access to debug builds or otherwise instrumented development versions of the SUT will make problem identification, root cause analysis and remediation more efficient. You should also be prepared to provide bug reports to developers and to communicate with product security teams when necessary.

After all necessary preparations have been made, you should set up the Codenomicon tools you will be using and start running tests against the tested system. During test execution and immediately after it, the results of the tests should be continuously and very carefully observed, validated and verified. Tests that cause issues should always be repeated, and the results observed even more closely than on the initial run. Any information on the SUT's behavior you can acquire will help in remediating the problem. This information can include crash dumps, core files, log output, debugging messages, stack traces, memory contents, process listings, PCAP packet captures or even ad-hoc descriptions of the observed error mode(s).

After the verification process is complete, you should strive to fix any and all of the discovered problems. Even though some of the discovered issues may seem less significant than others (e.g. someone in your organization might try to downplay the severity of some of the problems), they are all quality problems that should be fixed.

An important part of the remediation process is producing clear bug reports for developers. If you are a developer, you already know what information you need to fix the found issues. If you are a tester, you should first contact your developers to learn what information they expect. Most organizations have an in-house template for defect reports, so it is out of scope to attempt to provide one here. Typically a template includes details on the tested component or system (product, model, version, tested component, configuration), a description of the test setup, a description of the problem condition, and reproduction instructions.

However, from the standpoint of testing with Codenomicon tools, we can give guidelines as to what information is typically useful in reports. First, you should note down the parameters used for testing; see the test tool Summary log for a quick list of test tool settings. A good description of your test setup is also very good to include, possibly with a diagram or picture if the setup is an elaborate one. A list of the test cases that caused problems during testing is also essential; these are likewise available in the Summary log when instrumentation is used. The last and most important detail to include consists of detailed reports on the results of the failing tests: a description of the test case message or message sequence, a description of the anomaly (invalid input) used in that test, and a detailed description of the behavior of the SUT for that test case. The test case information is available from the test tool user documentation as well as by looking at the packet trace executed during the test case. Some users prefer to save the test case traffic to a PCAP file with Wireshark or Ethereal and attach that to their problem reports. The information provided by the various instrumentation mechanisms discussed above (crash dumps, core files, etc.) should also be included in the reports, if available.

As the test cases are fully deterministic, you can repeat the tests at any time as long as the original test setup has not been changed. If a developer wishes to see the crash in action, you can provide him or her with access to the tools (if allowed by your license) or invite them to observe the test execution. If you found transient errors (some test cases caused a failure at some point, but the problem could not be repeated or reduced to any particular test or range of tests), they should be noted separately, as they may still reveal an actual problem condition.

After the issues have been handled by developers and purportedly fixed, it is necessary to verify that the fixes indeed do what they are supposed to. It is a well-known fact in all development organizations that developers may sometimes fail to fix a problem completely, or may inadvertently introduce new problems while working on a fix for a previous one. It is your job to verify that this is not the case. This means not only repeating the tests that caused problems in the first place, but also repeating the full set of tests in order to verify that no new problems have been introduced.

Repetition in the above process also means repeating the whole testing cycle later on. The tests can and should also be used for regression testing of future versions of the product. This might mean a nightly automated regression run with just the test cases and/or test groups that have caused problems earlier, or a weekly full run that executes the complete set of test cases against the latest version of the product. Here the test setups and configuration details you saved earlier will come in handy, since you will want a repeatable, deterministic setup for running the tests.
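The nightly and weekly regression runs described above can be driven from a simple script. The sketch below only constructs the invocation; the command name `defensics-cli` and all of its flags are hypothetical placeholders, so consult your test tool's user manual for the real command-line options before adapting it.

```python
# Sketch of a nightly regression driver. The command name "defensics-cli"
# and its flags are HYPOTHETICAL placeholders; check the test tool user
# manual for the real command-line options.

import subprocess

def build_command(suite_dir, settings_file, case_range=None):
    """Construct the (hypothetical) command line for one automated run.
    case_range limits a nightly run to previously failing cases; a full
    weekly run omits it."""
    cmd = ["defensics-cli", "--suite", suite_dir, "--settings", settings_file]
    if case_range:
        cmd += ["--cases", case_range]
    return cmd

def run_regression(cmd, dry_run=True):
    """With dry_run the sketch stays side-effect free and just returns
    the command string; in real use, run it and check the exit code."""
    if dry_run:
        return " ".join(cmd)
    return subprocess.run(cmd, check=False).returncode

nightly = build_command("/opt/suites/sip", "sip-settings.set", "1200-1260")
weekly  = build_command("/opt/suites/sip", "sip-settings.set")
print(run_regression(nightly))
```

A scheduler such as cron or your existing test harness would then call the nightly variant every night and the full run weekly, together with whatever mechanism you use to restart the tested system.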

For nightly and weekly regression runs, we recommend running the test tool in command-line mode automatically from a script. If you are using a testing framework or a test harness, you should be able to easily define a script for running the test tool in an automated fashion. Keep in mind that you also need some mechanism for restarting the tested system automatically.

2.2. TESTERS VS. DEVELOPERS

Codenomicon DEFENSICS test tools can be used by developers and QA testers as well as by product security teams. For each of those groups, the basic usage pattern is roughly the same: the tests are run and the problems fixed. However, communicating the discovered problems and deciding how to prioritize fixes may become more difficult as the distance from development to test execution grows. The specifics of this process depend largely on your organization and on how mature your development, QA and security reporting practices are.

As a guideline, we recommend applying Codenomicon DEFENSICS tests at all phases of the product lifecycle, not just during development, QA testing or final gating / product security analysis. This will help ensure that all of the possible flaws are really caught, fixed and verified responsibly and proactively. However, the decision on where to apply the tests really depends on your own organization.

2.3. SECURITY VS. QA TESTING

For traditional QA testers, Codenomicon tests simply reveal quality (robustness) issues, just like any other good testing discipline. For security-aware individuals, these same issues resemble security problems that could later be used to launch denial-of-service attacks against the tested system or to take complete control over it. This is an important distinction, since the same problems may mean different things to different people inside your own organization. Because of this difference in terms, QA testers should be made aware of the severity and potential security impact of these issues.
This awareness-raising can also extend to your developers, if they do not yet have deep expertise in security. Practice has shown that good ways to educate QA testers and product developers are to demonstrate how the discovered flaws could later be used by malicious third parties, and to promote security and secure programming and development practices in general.

For security testers, it is important to understand the difference between Codenomicon robustness testing and the wider meaning of "security". All of the problems that our test tools help find are definitely security problems, but not all security problems can be found with robustness testing. The full definition of security also includes things like making sure a product does not have default passwords enabled when it is installed, that it does not store user information or its log files in an insecure way, and so on. Codenomicon robustness tests are automated black-box negative tests that reveal problems with security implications, but they are no substitute for verifying the security of a product in the wider sense. For a full security assessment, you must ensure that your whole development process adheres to good and solid security practices, including design, implementation, the various phases of testing, and incident handling later on during the product lifecycle. Codenomicon robustness testing provides an integral piece of overall product security by drastically reducing the number of implementation-level flaws in your product before it ever ships.

2.4. METRICS

Codenomicon DEFENSICS robustness testing provides a great way to assess the quality and security of your products and your development process over a longer period of time. While robustness testing identifies and helps remediate serious problems extremely well in the short run, the true benefits of integrating it into your product development practices really shine over the long term. To reap the full benefits of Codenomicon testing, you should keep track over time of all of the bugs and security vulnerabilities in your products, whether found with robustness testing or reported later. By looking at the track record of your products, you can calculate the benefit of having remediated a number of problems before the product shipped. You can also identify further steps to help reduce the total number of discovered problems: education of your developers and product managers, investing in additional DEFENSICS test tools or other development aids, establishing better practices for testing or security verification, and so on.

Other metrics that robustness testing can reveal are related to code coverage and input space coverage. Studies have shown that black-box negative testing covers the input parsing code of an implementation well, but it does not by and large exercise those portions of code relating to user interfaces or other unrelated areas. This means that while robustness testing exercises the critical input parsing and state machine code extremely well, you could use some of the time it frees up to cover those parts of the code that it does not exercise. You should use any available code coverage tools along with the test tools to assess which portions of the code are tested well.

Another important detail is that code coverage is not really a good metric for the testing requirements of negative testing. A better metric is input space coverage, which provides a figure on how well all of the potential inputs for a protocol or other interface have been covered by the tests. A well-designed set of robustness test cases provides a good combination of input space coverage and input handling code coverage. Input space coverage consists of covering all of the possible structures and substructures of the tested interface with test cases designed to exercise an implementation as thoroughly as possible. When performing network protocol tests, good input space coverage can be achieved by covering each message sequence, message, and message element with tests specifically designed and targeted for each level. Note that this includes every protocol feature and extension defined in any available specification, including ones that are obsolete or still in early stages of development. For file formats, good input space coverage is achieved by testing all possible structures and substructures of the file format in question.

2.5. RESOURCING

The total resources required to run Codenomicon robustness tests are not simply technical. They also include the human and hardware resources you need to allocate to executing the tests and fixing the found problems. The real amount of resources (total cost) you need to allocate for Codenomicon tests can therefore be derived from a combination of the hardware and software requirements and the people who are required to run the tests successfully. Keep in mind that people also need to maintain their expertise and sometimes receive training to make their work really worthwhile.

One decision you may have to make is how Codenomicon testing will be positioned within the various testing roles in your organization. Many organizations have a dedicated Codenomicon team which takes care of running Codenomicon tests across a number of platforms or products. This approach makes it easier to train a few people to be real experts in Codenomicon testing technology, but it widens the gap between the places where problems are introduced and where they are discovered. It is very useful if you already have a horizontal test or security organization in place that is in charge of verifying different products. Another approach is to roll out Codenomicon as an integrated testing solution among other daily tests across all product groups. This approach is also very good, since it necessarily takes the testing closer to the source of the problems, i.e. development. In this model the testing is performed inside a project or product development team, and the results are fed back to development as soon as they are discovered.

One more important aspect to consider is how much time it takes to learn to use Codenomicon DEFENSICS solutions well. What are the recommended skill sets for a DEFENSICS tester? How much training is required to become a power-user? Since the tools have been designed with usability and test efficiency in mind, this process is rather simple. Starting to use the test tools for the first time should not take longer than 5-10 minutes, provided that the user is familiar with the interface (protocol) being tested and can configure the test subject satisfactorily. The ideal skill set for a Codenomicon tester combines some expertise in the protocol area, some experience in the security area, and some experience in software testing and software development. Additional training will only improve the skill levels of the testers. Mastering the tested protocols and understanding the repercussions of all of the found issues obviously takes more time, and gaining solid expertise in the various network and systems technologies, software security, QA and security testing, secure development practices, process practices, test management and all of the other related disciplines can easily take a whole lifetime.

If you think your users could benefit from additional hands-on test tool training, or if you want us to help your users or developers understand the benefits of systematic security testing, we will be more than happy to provide training for you. Contact your local Codenomicon sales representative for more details.
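The input space coverage metric discussed above can be illustrated with a toy element-count model. This simplification (coverage as the fraction of modeled protocol elements touched by at least one test case) is our own, and the element inventory below is invented for illustration.

```python
# Toy model of input space coverage: the fraction of modeled protocol
# elements exercised by at least one test case. The element inventory
# here is invented for illustration.

def input_space_coverage(modeled_elements, tested_elements):
    """Coverage = covered / modeled, over the set of protocol messages,
    fields and sub-structures that the test design enumerates."""
    modeled = set(modeled_elements)
    covered = modeled & set(tested_elements)
    return len(covered) / len(modeled)

modeled = {"INVITE.via", "INVITE.contact", "BYE.cseq", "OPTIONS.accept"}
tested  = {"INVITE.via", "INVITE.contact", "BYE.cseq"}
print(input_space_coverage(modeled, tested))  # 0.75
```

Tracking such a figure per interface over successive releases gives a simple, comparable number to report alongside conventional code coverage.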

14 3. TEST TOOLS BASIC USAGE Since this document is intended as a general set of guidelines to applying Codenomicon DEFENSICS robustness testing to your development process, the usage instructions of each individual Codenomicon test tool is out of the scope of this document. However, we will present some general requirements as well as some practical tips for using the tools, drawing on real-life experiences learned with our various customers. For detailed instructions on the installation and usage of each test tool, please refer to the test tool user manuals and installation instructions supplied with the test tools REQUIREMENTS The following platforms are officially supported for use with Codenomicon test tools: Windows XP Service Pack 2 Fedora Core Linux 6 These platforms are used at Codenomicon to verify each test tool release before it ships. However, since the test tools are Java applications, they work well also on other platforms. Our users are reporting good results in using them on a daily basis on other Windows and Linux flavors as well as on Mac OS X and Solaris. However, your mileage may vary. If you wish to run the tools on an unsupported platform, contact Codenomicon support for more details. The hardware requirements for running Codenomicon tests are as follows: 1 GHz processor or faster 256 MB memory or more (more memory improves performance) 1 GB disk space for test tool installation Additional disk space for saving test tool log files (5-10 GB recommended) Ethernet network interface card Graphics display adapter (for GUI mode) The software requirements for running the tests are as follows: Sun Microsystems Java Runtime Environment update 6 or newer WinPcap 4.0 or newer on Windows platforms (selected test tools only) WinPcap 4.0 is required on Windows only with test tools that inject their test cases Page 14 Codenomicon Best Practices

15 using low-level Ethernet or IP socket interfaces. These test tools include all IPv4 and IPv6 tools, OSPFv2 and OSPFv3, PIM, RSVP, GRE, DVMRP, IS-IS, etc. Note: Sun Microsystems' Java Runtime Environment (JRE) is mandatory for running the tests. The test tools will not work with any other JRE, including GNU Java that is installed by default in some Linux distributions. If you run into a Java error when attempting to install or execute the test tools, please first check your Java version before contacting our tech support. Above we have discussed the technical requirements for running Codenomicon test tools. However, an even more important requirement are the resources needed to perform the tests and to promote the benefits of their results inside your own organization. This was discussed in section INSTALLATION All Codenomicon DEFENSICS test tools are supplied in two formats: a Java installer that can be used to install the test tool onto a local hard disk or network drive, and an ISO image that can be burned onto a CD and used from there. We recommend installing the test tool onto your local hard disk or network drive, if possible. All of the documentation in Codenomicon test tools is provided in HTML format. For this reason, some of our customers who have multiple users running the tools have opted to install the documentation centrally onto an intranet server, where all users can access it easily. This removes the need for all users to install the documentation separately (saving disk space) and allows others in your organization to view the documentation even without access to the actual test tools. Codenomicon test tool licenses can be distributed in two ways: either as a USB key that must be attached to the workstation running the test tool, or as a FlexLM license server license file, which removes the need for USB keys and allows the license to be used by the allotted number of users simultaneously even across multiple sites. 
For single-seat or limited multi-user usage within a single site, a USB license is recommended. For multiple users across multiple sites, the FlexLM license is likely more convenient. For more on the different license types and their pricing, please contact your Codenomicon sales agent.

3.3. USING THE TEST TOOLS

In a nutshell, the practical usage of Codenomicon DEFENSICS testing solutions typically proceeds as follows:

1. Analyze the system you will be testing. Select which interfaces will be tested and which Codenomicon DEFENSICS test tools will be needed. Decide on your testing strategy and write a test plan, if necessary.
2. Install the relevant test tool(s). If the system has several different interfaces to test, decide which interface is the most critical and start by testing it.
3. Configure the tested system.
4. Configure the test tool.
5. Verify that the test tool can connect to the tested system by executing the valid case(s).
6. Set up as many observation mechanisms as possible: valid-case or external instrumentation, logging both in the test tool and the tested system, enabling debugging and/or profiling in the tested system, and finally observing the test execution manually, if possible.
7. Run the tests, restarting the tested system automatically or manually when a problem occurs.
8. Verify the results by rerunning the test cases that caused problems.
9. Fix the problems found and repeat the tests to verify the fixes.
10. Rerun either just the failed test cases or all tests later for regression.
11. Move on to test additional interfaces as necessary.

Perhaps the most often neglected step above is observing the tested system closely, from as many angles as possible, during test execution. The test tools can find crash-level flaws in your systems, but they can also reveal memory leaks, problems in logging and other output code, and any number of similar "non-fatal" conditions.

As another useful hint, do not forget to save the details of your test setup for reproducing the test results later. This includes the configuration of the tested system and its version details, the exact setup of your test environment, and the test tool log files with the relevant settings visible.
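The hint above about saving test setup details can be made concrete with a small, tool-agnostic record kept alongside the test tool's own logs. This is a sketch only: the field names and sample values are illustrative, not DEFENSICS terminology.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TestSetupRecord:
    """Everything needed to reproduce a test run later."""
    sut_name: str
    sut_version: str
    sut_configuration: str            # e.g. path to the config file used
    test_tool: str
    test_tool_version: str
    tool_settings: dict = field(default_factory=dict)
    log_files: list = field(default_factory=list)

    def to_json(self) -> str:
        # A stable, human-readable format that can be archived with the logs.
        return json.dumps(asdict(self), indent=2, sort_keys=True)

# Example: record one run and print it for archiving.
record = TestSetupRecord(
    sut_name="example-sip-proxy",
    sut_version="2.4.1",
    sut_configuration="configs/proxy-default.cfg",
    test_tool="DEFENSICS SIP suite",
    test_tool_version="x.y",
    tool_settings={"target": "192.0.2.10:5060", "instrumentation": "valid-case"},
    log_files=["logs/run-001/main.log"],
)
print(record.to_json())
```

A record like this, saved next to the packet captures and log files, is usually enough to rebuild the test bed months later.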
Documenting your test setups well is crucial for sustained usage of the test tools.

Another step that is often overlooked is the need to interoperate with the tested system before the tests can be meaningfully executed. Some test setups require careful configuration of both the tested system and the test tool so that they work well together. Before beginning testing, you should always spend some time running the valid cases at the beginning of the test material to see that they complete successfully. The number of valid cases varies across test tools, but usually at least some of them should operate against the tested system before it makes sense to run the full set of tests. In some protocols, not all valid cases need to complete successfully for the tests to work; please check the test tool documentation for more details on the valid cases.

3.4. VERIFYING TEST RESULTS

In testing, a verdict is typically assigned to each test case (and test group). The possible verdicts are passed, failed and inconclusive. In Codenomicon testing, the failed verdict is given to a test case if any of the following criteria are met and the case can be identified as responsible:

1. A fatal failure is triggered in a device, causing it to stop functioning normally.
2. A process crashes or hangs and needs to be restarted manually.
3. A process crashes and restarts automatically.
4. A process consumes almost all CPU and/or memory resources for an exceptionally long or indefinite time.

If no single test case can be pointed out, but similar effects are observed from time to time or over a set of test cases, the verdict is inconclusive. This occurs when a number of test cases or groups cause a cumulative corruptive effect in the tested system. In some cases, the test subject may be corrupted so badly, or become so unstable, that there is no way to collect accurate results for a set of tests. This may leave untested regions that must be marked as not tested. Otherwise, the verdict is passed.

A test group is considered failed if any single case or combination of cases causes the system under test to fail. A group is inconclusive if vague unwanted behavior is observed, but no single test case or combination of cases can be identified as the cause.
Otherwise, a group is considered passed.

The test tool itself can only determine two conditions: whether instrumentation passes successfully after a test case, or not. The instrumentation can be valid-case instrumentation, where a valid interaction (e.g. a valid protocol message sequence) between the test tool and the tested system is executed after each test case, or external instrumentation, where the return code of a user-specified command or script launched from the test tool determines success or failure for the test case. Examples of external instrumentation scripts include connecting to the tested system over SSH or Telnet and checking whether the test subject is still operational.

If valid-case or external instrumentation fails, the instrumentation is repeated until it succeeds (i.e. until the system under test recovers). The test tool proceeds to the next test case only after instrumentation is successful. External instrumentation also allows the user to specify another command or script to be executed if the instrumentation itself fails, for example a script that connects remotely to the tested system and restarts the test subject.

Depending on the results of the instrumentation, the test tool reports the result of a test case as passed, degradation-of-service or denial-of-service. A result of "passed" means that the first round of instrumentation was successful after that test case. A result of "degradation-of-service" means that more than one but less than four instrumentation rounds had to be performed before instrumentation was successful. A result of "denial-of-service" means that more than five instrumentation attempts were needed before instrumentation was successful. Note that if no valid-case or external instrumentation is used, the test tool cannot know the health of the tested system after each test case and therefore reports the result as "n/a".
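As a concrete illustration, an external instrumentation script of the kind described above can be as small as a TCP reachability probe whose exit code signals the health of the test subject. This is a hedged sketch: the command-line shape and the exit-code convention (0 = healthy, 1 = unhealthy) are assumptions about a typical setup, not documented DEFENSICS behavior.

```python
#!/usr/bin/env python3
"""Minimal external instrumentation sketch: probe a TCP service port
and report the result through the process exit code."""
import socket
import sys

def sut_is_healthy(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if the SUT still accepts TCP connections on its port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__" and len(sys.argv) == 3:
    # Invoked by the test tool as e.g.: healthcheck.py <host> <port>
    sys.exit(0 if sut_is_healthy(sys.argv[1], int(sys.argv[2])) else 1)
```

A restart hook for the "script executed if instrumentation fails" case can follow the same pattern, running a remote restart command instead of a probe.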

4. ADVANCED USAGE

Our customers often start using Codenomicon test tools in graphical user interface mode, configuring the settings and starting test runs manually. Later, once basic usage has become ingrained, users graduate to running the tools from the command line or in remote prompt mode, integrating them into their own test harnesses, scripts or frameworks, and running the tests automatically against a number of different test beds with varying configurations.

One example of advanced usage is running a test tool in remote prompt mode. This mode allows the user to connect to the test tool over telnet and issue configuration and execution commands through the remote connection. Remote mode usage can be automated further using Tcl or Expect scripts, depending on which you feel more comfortable with.

Another advanced hint is to use the external instrumentation feature to its full potential. Even though external instrumentation is presented as an additional feature for checking the health of the tested system, it is really a general extension mechanism for the test tool. External instrumentation allows the user to launch arbitrary scripts from within the test tool and to include the output and results of those scripts directly in the test tool log files. This means you can print portions of the tested system's log files into the test tool logs, debug the tested system on the fly in various ways, take periodic snapshots of the tested system during test execution, and so on. You can even control other Codenomicon test tools through the external instrumentation mechanism.

4.1. SEVEN TIPS FOR MORE EFFECTIVE ROBUSTNESS TESTING

A few short tips go a long way towards ensuring that you get the maximum benefit out of your robustness testing with Codenomicon products.

Use valid-case instrumentation. Valid-case instrumentation is the best possible way to assess the health of the implementation during the test run. It correlates each potential crash exactly with a particular input (test case), which is hard or impossible to do if you are trying to observe the implementation independently. Of course, valid-case instrumentation requires that the protocol has some form of request-response sequence in which the test tool can expect a response from the tested system. In some protocols, and in client-side tests, this is usually not possible. For these scenarios, you should use external instrumentation.

Restart your implementation automatically after a problem is found. A typical failure mode for the bugs we help find is a total crash of the tested implementation. If an implementation crashes, it does not make sense to continue running the tests before it recovers. If you can make the tested implementation restart and/or reset its state automatically after a problem is found, you can easily perform completely automated test runs: leave the tests running overnight and come back in the morning to review the results. Remember that you can use external instrumentation to trigger a script that restarts your implementation if you do not already have a mechanism to restart it automatically.

Use external instrumentation. Valid-case instrumentation can only be used for protocols and test setups where a request-response sequence can be meaningfully executed. For other scenarios, external instrumentation can be applied to check the health of the implementation. You can define a script that uses Telnet or SSH to connect to the tested system and check its health, or in some cases you can simply ping the tested system to see if it is still operational. External instrumentation provides another mechanism for correlating test results with particular test cases, and it should not be neglected. It also provides the additional capability to restart the implementation automatically, which is another useful feature for test automation. Try combining valid-case instrumentation with the "invoked if instrumentation fails" script option in the external instrumentation settings to perform a restart whenever valid-case instrumentation fails.

Observe your implementation as closely as possible. After you have run the initial set of tests, observed the most obvious crashes in your implementation, and issued fixes for those crashes, you need to start thinking about what other conditions might be revealed by systematic robustness testing.
You should then perform further test runs, observing your implementation as closely as possible using white-box testing aids: debuggers, profilers, memory checkers, and code coverage tools. You could even run automated code analysis suites on those portions of the code that have been shown to contain errors, as they are likely to contain a higher percentage of problems than the software as a whole.

Keep the tests handy for fix verification and regression testing. Some of our customers run regression tests nightly or weekly by running the test tools in command-line mode from a UNIX cron job. Depending on your existing regression harnesses or testing frameworks, you may have another preferred way of doing this. In any case, since our tests are completely deterministic, they lend themselves extremely well to repeated regression testing and fix verification.

Run robustness tests alongside performance / load tests. A device or system may behave differently when it is subjected to extreme load conditions. When a system is loaded close to 100%, timings between different functions and other constructs in the code become elongated, which may cause new anomalous behavior to emerge. Sometimes these problems are completely different from the ones you can find by running Codenomicon tests without subjecting the system to high load. Therefore, once you have run the initial set of tests against your target, try running load tests in parallel to see if you can identify more obscure problems. Keep in mind, however, that discovering timing problems in particular may take time and may require repetition before the problem condition is triggered. You should leave the tests running for a longer period to catch these problems.

Request tests for new interfaces from Codenomicon. We can build excellent robustness tests for any interface with relative ease; we have experience in making software fail. If you have an interface (protocol or file format) which our products do not yet cover, but which you think would benefit from testing, just contact us at any time. We will be happy to consider making tests for any new and interesting interface!
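Several of the tips above (automatic restart, external instrumentation, unattended overnight runs) can be combined into a simple watchdog loop around the test run. The sketch below is an illustration only, not DEFENSICS functionality: the `health_check` and `restart` hooks stand in for your own probe and restart scripts.

```python
import time

def wait_for_recovery(health_check, restart, max_restarts=3, poll_delay=0.0):
    """Repeat the health check; when it fails, invoke the restart hook and
    keep checking until the SUT recovers or the restart budget runs out.
    Returns the number of restarts that were needed."""
    restarts = 0
    while not health_check():
        if restarts >= max_restarts:
            raise RuntimeError("SUT did not recover; aborting the test run")
        restart()               # e.g. ssh to the target and restart the process
        restarts += 1
        time.sleep(poll_delay)  # give the SUT a moment to come back up
    return restarts

# Demonstration with stub hooks: the SUT is "down" for the first two checks.
state = {"checks": 0}
def stub_health_check():
    state["checks"] += 1
    return state["checks"] > 2

print(wait_for_recovery(stub_health_check, restart=lambda: None))  # prints 2
```

Called between test cases, the return value also gives a rough degradation measure of the kind discussed in section 3.4.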

5. CONTACTING CODENOMICON

Finally, if you have any further questions about using Codenomicon DEFENSICS test tools or integrating them into your development lifecycle, please feel free to contact us at any time. We will be happy to help you get the full benefit of testing with Codenomicon tools. Answers to some frequently asked questions on tool usage, technical details and process issues can also be found in the support section of our web site. For licensing and other sales-related questions, please contact your local Codenomicon sales representative.

6. CONCLUSION

It is our sincere wish that you are able to reap the full benefits of the complete protocol testing solution provided by Codenomicon DEFENSICS. Adopting the tools throughout your development lifecycle allows you to discover critical problems at an early stage, which in turn minimizes the risk of critical errors being discovered later in the field. A solid, proactive defense against security problems and quality issues improves your corporate image, helps protect you against potential liability concerns, and allows you to concentrate on your core business in a world filled with constant attempts to exploit and attack any vulnerable system.

APPENDICES

APPENDIX A - MY TESTING CHECKLIST

TESTED SYSTEM:            VERSION:
TEST TOOL:                VERSION:
TESTER:

Before testing:
[ ] SUT has been set up properly
[ ] Test setup has been documented
[ ] No firewalls or other unintended components are in place between the test tool and the tested system
[ ] One or more valid cases work against the SUT
[ ] Instrumentation is enabled
[ ] SUT is restarted automatically if a problem occurs
[ ] SUT logs and all other available debugging information are being captured

After testing:
[ ] Cases causing failures have been noted and verified
[ ] Failures have been reported to development
[ ] Fixes have been verified by repeating the failed tests
[ ] All tests have been repeated to find any new issues that may have been introduced

APPENDIX B - MY TEST REPORT

TESTED PRODUCT:           VERSION:
TEST TOOL:                DATE:
TESTER:

SUMMARY OF RESULTS:

TEST CASES CAUSING FAILURES:

INCONCLUSIVE TEST CASES:

SUGGESTED ACTIONS:

Attachments:
[ ] Description of the test setup
[ ] Detailed descriptions of the found problems
[ ] Packet captures of the test traffic causing problems
[ ] SUT logs or other information to help remediation

APPENDIX C - GLOSSARY

Robustness
Robustness is the ability of software to tolerate exceptional input and stressful environmental conditions. A piece of software that is not robust fails when facing such circumstances, and a malicious intruder can take advantage of robustness problems to compromise the system running the software. Most publicly reported security vulnerabilities are caused by robustness weaknesses. Robustness testing tools are intended to improve robustness: they pinpoint flaws in an implementation by sending invalid and completely malformed inputs to it.

Black-box testing
In black-box testing, no details of the tested implementation need to be available to the tester; the test tool only needs to be able to connect to the tested protocol interface. In particular, no source code for the tested implementation is required. Although black-box testing is possible even without access to the tested implementation, usually some method of restarting the implementation after a crash is required. Black-box testing differs from white-box testing in that it does not require access to source code, the ability to run a debugger on the tested software, or any information about the internal state or health of the implementation.

Fuzz testing
Fuzz testing, or fuzzing, is similar to robustness testing, but fuzzing is usually performed with random data ("white noise"). In robustness testing, the test cases are more carefully designed and planned beforehand, with the invalid data representing inputs that have been known to uncover problems in the past. Given enough time, fuzz testing can cover the whole possible input space, but practice has shown that pre-designed robustness test cases are much more effective at finding problems in software than completely random inputs.
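To make the contrast concrete, here is a toy random-mutation fuzzer of the "white noise" kind the glossary describes. It is purely illustrative (the sample message is arbitrary) and seeded so its output is reproducible:

```python
import random

def mutate(data: bytes, n_flips: int, seed: int) -> bytes:
    """Overwrite n_flips randomly chosen bytes with random values."""
    rng = random.Random(seed)
    out = bytearray(data)
    for _ in range(n_flips):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

# A pre-designed robustness case would target a known weak spot instead
# (e.g. an overlong header field); random mutation just sprays the input.
valid_message = b"REGISTER sip:example.com SIP/2.0\r\n"
for seed in range(3):
    fuzzed = mutate(valid_message, n_flips=4, seed=seed)
    print(len(fuzzed) == len(valid_message))  # mutation preserves length
```

Even this toy shows why random fuzzing needs many iterations: most mutations land in uninteresting parts of the message, whereas a designed case goes straight for a historically fragile construct.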
PROTOS
PROTOS is the name of a well-known Oulu University Secure Programming Group (OUSPG) research project investigating and improving implementation-level robustness. The testing methods used by Codenomicon were originally devised in the PROTOS project, and some people know robustness testing as PROTOS testing. For many security professionals worldwide, PROTOS represented a whole new way of looking at regimented, automated security testing.


More information

Software Quality. Richard Harris

Software Quality. Richard Harris Software Quality Richard Harris Part 1 Software Quality 143.465 Software Quality 2 Presentation Outline Defining Software Quality Improving source code quality More on reliability Software testing Software

More information

Achieving End-to-End Security in the Internet of Things (IoT)

Achieving End-to-End Security in the Internet of Things (IoT) Achieving End-to-End Security in the Internet of Things (IoT) Optimize Your IoT Services with Carrier-Grade Cellular IoT June 2016 Achieving End-to-End Security in the Internet of Things (IoT) Table of

More information

csc444h: so(ware engineering I matt medland

csc444h: so(ware engineering I matt medland csc444h: so(ware engineering I matt medland matt@cs.utoronto.ca http://www.cs.utoronto.ca/~matt/csc444 tes2ng top- 10 infrastructure source code control including other types of testing reproducible builds

More information

1.1 For Fun and Profit. 1.2 Common Techniques. My Preferred Techniques

1.1 For Fun and Profit. 1.2 Common Techniques. My Preferred Techniques 1 Bug Hunting Bug hunting is the process of finding bugs in software or hardware. In this book, however, the term bug hunting will be used specifically to describe the process of finding security-critical

More information

Specialized Security Services, Inc. REDUCE RISK WITH CONFIDENCE. s3security.com

Specialized Security Services, Inc. REDUCE RISK WITH CONFIDENCE. s3security.com Specialized Security Services, Inc. REDUCE RISK WITH CONFIDENCE s3security.com Security Professional Services S3 offers security services through its Security Professional Services (SPS) group, the security-consulting

More information

Your Data and Artificial Intelligence: Wise Athena Security, Privacy and Trust. Wise Athena Security Team

Your Data and Artificial Intelligence: Wise Athena Security, Privacy and Trust. Wise Athena Security Team Your Data and Artificial Intelligence: Wise Athena Security, Privacy and Trust Wise Athena Security Team Contents Abstract... 3 Security, privacy and trust... 3 Artificial Intelligence in the cloud and

More information

Enabling Performance & Stress Test throughout the Application Lifecycle

Enabling Performance & Stress Test throughout the Application Lifecycle Enabling Performance & Stress Test throughout the Application Lifecycle March 2010 Poor application performance costs companies millions of dollars and their reputation every year. The simple challenge

More information

Metrics That Matter: Quantifying Software Security Risk

Metrics That Matter: Quantifying Software Security Risk Metrics That Matter: Quantifying Software Security Risk Brian Chess Fortify Software 2300 Geng Road, Suite 102 Palo Alto, CA 94303 1-650-213-5600 brian@fortifysoftware.com Abstract Any endeavor worth pursuing

More information

Game keystrokes or Calculates how fast and moves a cartoon Joystick movements how far to move a cartoon figure on screen figure on screen

Game keystrokes or Calculates how fast and moves a cartoon Joystick movements how far to move a cartoon figure on screen figure on screen Computer Programming Computers can t do anything without being told what to do. To make the computer do something useful, you must give it instructions. You can give a computer instructions in two ways:

More information

Xerox Device Data Collector 1.1 Security and Evaluation Guide

Xerox Device Data Collector 1.1 Security and Evaluation Guide Xerox Device Data Collector 1.1 Security and Evaluation Guide 2009 Xerox Corporation. All rights reserved. Xerox, WorkCentre, Phaser and the sphere of connectivity design are trademarks of Xerox Corporation

More information

shortcut Tap into learning NOW! Visit for a complete list of Short Cuts. Your Short Cut to Knowledge

shortcut Tap into learning NOW! Visit  for a complete list of Short Cuts. Your Short Cut to Knowledge shortcut Your Short Cut to Knowledge The following is an excerpt from a Short Cut published by one of the Pearson Education imprints. Short Cuts are short, concise, PDF documents designed specifically

More information

OWASP Top 10 The Ten Most Critical Web Application Security Risks

OWASP Top 10 The Ten Most Critical Web Application Security Risks OWASP Top 10 The Ten Most Critical Web Application Security Risks The Open Web Application Security Project (OWASP) is an open community dedicated to enabling organizations to develop, purchase, and maintain

More information

Product Security Program

Product Security Program Product Security Program An overview of Carbon Black s Product Security Program and Practices Copyright 2016 Carbon Black, Inc. All rights reserved. Carbon Black is a registered trademark of Carbon Black,

More information

The SD-WAN security guide

The SD-WAN security guide The SD-WAN security guide How a flexible, software-defined WAN can help protect your network, people and data SD-WAN security: Separating fact from fiction For many companies, the benefits of SD-WAN are

More information

Higher-order Testing. Stuart Anderson. Stuart Anderson Higher-order Testing c 2011

Higher-order Testing. Stuart Anderson. Stuart Anderson Higher-order Testing c 2011 Higher-order Testing Stuart Anderson Defining Higher Order Tests 1 The V-Model V-Model Stages Meyers version of the V-model has a number of stages that relate to distinct testing phases all of which are

More information

Seqrite Endpoint Security

Seqrite Endpoint Security Enterprise Security Solutions by Quick Heal Integrated enterprise security and unified endpoint management console Enterprise Suite Edition Product Highlights Innovative endpoint security that prevents

More information

BUILDING APPLICATION SECURITY INTO PRODUCTION CONTAINER ENVIRONMENTS Informed by the National Institute of Standards and Technology

BUILDING APPLICATION SECURITY INTO PRODUCTION CONTAINER ENVIRONMENTS Informed by the National Institute of Standards and Technology BUILDING APPLICATION SECURITY INTO PRODUCTION CONTAINER ENVIRONMENTS Informed by the National Institute of Standards and Technology ebook BUILDING APPLICATION SECURITY INTO PRODUCTION CONTAINER ENVIRONMENTS

More information

Introduction to Software Testing

Introduction to Software Testing Introduction to Software Testing Software Testing This paper provides an introduction to software testing. It serves as a tutorial for developers who are new to formal testing of software, and as a reminder

More information

SECURITY AUTOMATION BEST PRACTICES. A Guide on Making Your Security Team Successful with Automation SECURITY AUTOMATION BEST PRACTICES - 1

SECURITY AUTOMATION BEST PRACTICES. A Guide on Making Your Security Team Successful with Automation SECURITY AUTOMATION BEST PRACTICES - 1 SECURITY AUTOMATION BEST PRACTICES A Guide on Making Your Security Team Successful with Automation SECURITY AUTOMATION BEST PRACTICES - 1 Introduction The best security postures are those that are built

More information

How to Break Software by James Whittaker

How to Break Software by James Whittaker How to Break Software by James Whittaker CS 470 Practical Guide to Testing Consider the system as a whole and their interactions File System, Operating System API Application Under Test UI Human invokes

More information

CS 356 Operating System Security. Fall 2013

CS 356 Operating System Security. Fall 2013 CS 356 Operating System Security Fall 2013 Review Chapter 1: Basic Concepts and Terminology Chapter 2: Basic Cryptographic Tools Chapter 3 User Authentication Chapter 4 Access Control Lists Chapter 5 Database

More information

Measuring VDI Fitness and User Experience Technical White Paper

Measuring VDI Fitness and User Experience Technical White Paper Measuring VDI Fitness and User Experience Technical White Paper 3600 Mansell Road Suite 200 Alpharetta, GA 30022 866.914.9665 main 678.397.0339 fax info@liquidwarelabs.com www.liquidwarelabs.com Table

More information

The Business Case for Security in the SDLC

The Business Case for Security in the SDLC The Business Case for Security in the SDLC Make Security Part of your Application Quality Program Otherwise, Development Teams Don t View it is Part of their Job The notion of application quality, which

More information

Hacker Academy Ltd COURSES CATALOGUE. Hacker Academy Ltd. LONDON UK

Hacker Academy Ltd COURSES CATALOGUE. Hacker Academy Ltd. LONDON UK Hacker Academy Ltd COURSES CATALOGUE Hacker Academy Ltd. LONDON UK TABLE OF CONTENTS Basic Level Courses... 3 1. Information Security Awareness for End Users... 3 2. Information Security Awareness for

More information

Internet Scanner 7.0 Service Pack 2 Frequently Asked Questions

Internet Scanner 7.0 Service Pack 2 Frequently Asked Questions Frequently Asked Questions Internet Scanner 7.0 Service Pack 2 Frequently Asked Questions April 2005 6303 Barfield Road Atlanta, GA 30328 Tel: 404.236.2600 Fax: 404.236.2626 Internet Security Systems (ISS)

More information

Hello, and welcome to a searchsecurity.com. podcast: How Security is Well Suited for Agile Development.

Hello, and welcome to a searchsecurity.com. podcast: How Security is Well Suited for Agile Development. [ MUSIC ] Hello, and welcome to a searchsecurity.com podcast: How Security is Well Suited for Agile Development. My name is Kyle Leroy, and I'll be moderating this podcast. I'd like to start by introducing

More information

In-Memory Fuzzing in JAVA

In-Memory Fuzzing in JAVA Your texte here. In-Memory Fuzzing in JAVA 2012.12.17 Xavier ROUSSEL Summary I. What is Fuzzing? Your texte here. Introduction Fuzzing process Targets Inputs vectors Data generation Target monitoring Advantages

More information

SYMANTEC: SECURITY ADVISORY SERVICES. Symantec Security Advisory Services The World Leader in Information Security

SYMANTEC: SECURITY ADVISORY SERVICES. Symantec Security Advisory Services The World Leader in Information Security SYMANTEC: SECURITY ADVISORY SERVICES Symantec Security Advisory Services The World Leader in Information Security Knowledge, as the saying goes, is power. At Symantec we couldn t agree more. And when it

More information

CERT C++ COMPLIANCE ENFORCEMENT

CERT C++ COMPLIANCE ENFORCEMENT CERT C++ COMPLIANCE ENFORCEMENT AUTOMATED SOURCE CODE ANALYSIS TO MAINTAIN COMPLIANCE SIMPLIFY AND STREAMLINE CERT C++ COMPLIANCE The CERT C++ compliance module reports on dataflow problems, software defects,

More information

Data Protection. Plugging the gap. Gary Comiskey 26 February 2010

Data Protection. Plugging the gap. Gary Comiskey 26 February 2010 Data Protection. Plugging the gap Gary Comiskey 26 February 2010 Data Protection Trends in Financial Services Financial services firms are deploying data protection solutions across their enterprise at

More information

Oracle Developer Studio Code Analyzer

Oracle Developer Studio Code Analyzer Oracle Developer Studio Code Analyzer The Oracle Developer Studio Code Analyzer ensures application reliability and security by detecting application vulnerabilities, including memory leaks and memory

More information

Security Testing: Terminology, Concepts, Lifecycle

Security Testing: Terminology, Concepts, Lifecycle Security Testing: Terminology, Concepts, Lifecycle Ari Takanen, CTO, Codenomicon Ltd. Ian Bryant, Technical Director, UK TSI 1 About the Speakers Ari Takanen Researcher/Teacher 1998-2002 @University of

More information

Bring Your Own Device (BYOD)

Bring Your Own Device (BYOD) Bring Your Own Device (BYOD) An information security and ediscovery analysis A Whitepaper Call: +44 345 222 1711 / +353 1 210 1711 Email: cyber@bsigroup.com Visit: bsigroup.com Executive summary Organizations

More information

Test Oracles. Test Oracle

Test Oracles. Test Oracle Encontro Brasileiro de Testes de Software April 23, 2010 Douglas Hoffman, BACS, MBA, MSEE, ASQ-CSQE, ASQ-CMQ/OE, ASQ Fellow Software Quality Methods, LLC. (SQM) www.softwarequalitymethods.com doug.hoffman@acm.org

More information

Security Engineering for Software

Security Engineering for Software Security Engineering for Software CS996 CISM Jia An Chen 03/31/04 Current State of Software Security Fundamental lack of planning for security Most security issues come to light only after completion of

More information

Product Security Briefing

Product Security Briefing Product Security Briefing Performed on: Adobe ColdFusion 8 Information Risk Management Plc 8th Floor Kings Building Smith Square London SW1 P3JJ UK T +44 (0)20 7808 6420 F +44 (0)20 7808 6421 Info@irmplc.com

More information

THINGS YOU NEED TO KNOW ABOUT USER DOCUMENTATION DOCUMENTATION BEST PRACTICES

THINGS YOU NEED TO KNOW ABOUT USER DOCUMENTATION DOCUMENTATION BEST PRACTICES 5 THINGS YOU NEED TO KNOW ABOUT USER DOCUMENTATION DOCUMENTATION BEST PRACTICES THIS E-BOOK IS DIVIDED INTO 5 PARTS: 1. WHY YOU NEED TO KNOW YOUR READER 2. A USER MANUAL OR A USER GUIDE WHAT S THE DIFFERENCE?

More information

Testing. Prof. Clarkson Fall Today s music: Wrecking Ball by Miley Cyrus

Testing. Prof. Clarkson Fall Today s music: Wrecking Ball by Miley Cyrus Testing Prof. Clarkson Fall 2017 Today s music: Wrecking Ball by Miley Cyrus Review Previously in 3110: Modules Specification (functions, modules) Today: Validation Testing Black box Glass box Randomized

More information

INTRODUCTION TO SOFTWARE ENGINEERING

INTRODUCTION TO SOFTWARE ENGINEERING INTRODUCTION TO SOFTWARE ENGINEERING Introduction to Software Testing d_sinnig@cs.concordia.ca Department for Computer Science and Software Engineering What is software testing? Software testing consists

More information

"Charting the Course to Your Success!" Securing.Net Web Applications Lifecycle Course Summary

Charting the Course to Your Success! Securing.Net Web Applications Lifecycle Course Summary Course Summary Description Securing.Net Web Applications - Lifecycle is a lab-intensive, hands-on.net security training course, essential for experienced enterprise developers who need to produce secure.net-based

More information

IPS with isensor sees, identifies and blocks more malicious traffic than other IPS solutions

IPS with isensor sees, identifies and blocks more malicious traffic than other IPS solutions IPS Effectiveness IPS with isensor sees, identifies and blocks more malicious traffic than other IPS solutions An Intrusion Prevention System (IPS) is a critical layer of defense that helps you protect

More information

CS 161 Computer Security. Security Throughout the Software Development Process

CS 161 Computer Security. Security Throughout the Software Development Process Popa & Wagner Spring 2016 CS 161 Computer Security 1/25 Security Throughout the Software Development Process Generally speaking, we should think of security is an ongoing process. For best results, it

More information

Software Security and Exploitation

Software Security and Exploitation COMS E6998-9: 9: Software Security and Exploitation Lecture 8: Fail Secure; DoS Prevention; Evaluating Components for Security Hugh Thompson, Ph.D. hthompson@cs.columbia.edu Failing Securely and Denial

More information

Security Automation Best Practices

Security Automation Best Practices WHITEPAPER Security Automation Best Practices A guide to making your security team successful with automation TABLE OF CONTENTS Introduction 3 What Is Security Automation? 3 Security Automation: A Tough

More information

Secure Development Lifecycle

Secure Development Lifecycle Secure Development Lifecycle Strengthening Cisco Products The Cisco Secure Development Lifecycle (SDL) is a repeatable and measurable process designed to increase Cisco product resiliency and trustworthiness.

More information

NOTHING IS WHAT IT SIEMs: COVER PAGE. Simpler Way to Effective Threat Management TEMPLATE. Dan Pitman Principal Security Architect

NOTHING IS WHAT IT SIEMs: COVER PAGE. Simpler Way to Effective Threat Management TEMPLATE. Dan Pitman Principal Security Architect NOTHING IS WHAT IT SIEMs: COVER PAGE Simpler Way to Effective Threat Management TEMPLATE Dan Pitman Principal Security Architect Cybersecurity is harder than it should be 2 SIEM can be harder than it should

More information

CimTrak Product Brief. DETECT All changes across your IT environment. NOTIFY Receive instant notification that a change has occurred

CimTrak Product Brief. DETECT All changes across your IT environment. NOTIFY Receive instant notification that a change has occurred DETECT All changes across your IT environment With coverage for your servers, network devices, critical workstations, point of sale systems, and more, CimTrak has your infrastructure covered. CimTrak provides

More information

LEARN READ ON TO MORE ABOUT:

LEARN READ ON TO MORE ABOUT: For a complete picture of what s going on in your network, look beyond the network itself to correlate events in applications, databases, and middleware. READ ON TO LEARN MORE ABOUT: The larger and more

More information

Software Engineering 2 A practical course in software engineering. Ekkart Kindler

Software Engineering 2 A practical course in software engineering. Ekkart Kindler Software Engineering 2 A practical course in software engineering Quality Management Main Message Planning phase Definition phase Design phase Implem. phase Acceptance phase Mainten. phase 3 1. Overview

More information

The Path Not Taken: Maximizing the ROI of Increased Decision Coverage

The Path Not Taken: Maximizing the ROI of Increased Decision Coverage The Path Not Taken: Maximizing the ROI of Increased Decision Coverage Laura Bright Laura_bright@mcafee.com Abstract Measuring code coverage is a popular way to ensure that software is being adequately

More information

Effective Threat Modeling using TAM

Effective Threat Modeling using TAM Effective Threat Modeling using TAM In my blog entry regarding Threat Analysis and Modeling (TAM) tool developed by (Application Consulting and Engineering) ACE, I have watched many more Threat Models

More information

CYSE 411/AIT 681 Secure Software Engineering. Topic #6. Seven Software Security Touchpoints (III) Instructor: Dr. Kun Sun

CYSE 411/AIT 681 Secure Software Engineering. Topic #6. Seven Software Security Touchpoints (III) Instructor: Dr. Kun Sun CYSE 411/AIT 681 Secure Software Engineering Topic #6. Seven Software Security Touchpoints (III) Instructor: Dr. Kun Sun Reading This lecture [McGraw]: Ch. 7-9 2 Seven Touchpoints 1. Code review 2. Architectural

More information

Synology Security Whitepaper

Synology Security Whitepaper Synology Security Whitepaper 1 Table of Contents Introduction 3 Security Policy 4 DiskStation Manager Life Cycle Severity Ratings Standards Security Program 10 Product Security Incident Response Team Bounty

More information

SECURITY & PRIVACY DOCUMENTATION

SECURITY & PRIVACY DOCUMENTATION Okta s Commitment to Security & Privacy SECURITY & PRIVACY DOCUMENTATION (last updated September 15, 2017) Okta is committed to achieving and preserving the trust of our customers, by providing a comprehensive

More information