
Test/Debug Guide

The following topics explain how to use the SB Test/Debug perspective in StreamBase Studio to run your StreamBase application and verify that it is functioning as expected, and how to use the related command-line utilities.

Reference Pages

StreamBase Environment Variables
StreamBase Java System Properties
StreamBase Expression Language Functions

Contents

StreamBase Command Prompt
Running StreamBase Applications
Overview of Running Applications
Running Applications in Studio
Editing Launch Configurations
Running Applications from the Command Line
Running Multiple StreamBase Applications
Attaching to Running StreamBase Applications
Attaching to a Running Application
Attaching to a Running Server in Debug Mode
Using Feed Simulations
Feed Simulation Overview
Running Feed Simulations
Manual Input of Data
Using the Feed Simulation Editor
Feed Simulation with a JDBC Data Source
Feed Simulation Timestamp Options
Feed Simulation with Custom File Reader
Map to Sub-Fields Option
Command Line Feed Simulations
Recording and Playing Back Data Streams
Debugging StreamBase Applications
Debugging Overview
Using the EventFlow Debugger

Intermediate Stream Dequeuing
Trace Debugging
Runtime Tracing and Creating Trace Files
Trace Debugging in StreamBase Studio
StreamBase JUnit Tests
Creating and Running StreamBase JUnit Tests
Editing StreamBase JUnit Tests
StreamBase JUnit Test Tutorial
StreamBase Tests (sbtest)
Creating and Running StreamBase Tests
Using the StreamBase Test Editor
Creating and Running Test Suites
Runtime Error Logging
Monitoring Running Applications
Profiling Operator Performance

StreamBase Command Prompt

Contents: Introduction, Environment Settings, Opening a StreamBase Command Prompt, StreamBase 64-Bit Command Prompt, Setting the Environment Globally, More Than One StreamBase Installation, Running As Administrator on Windows Vista and Windows 7

Introduction

The StreamBase Command Prompt is a feature of the Windows version of StreamBase. Recent StreamBase releases provide a link in the Start menu for the StreamBase n.m Command Prompt, where n and m are the major and minor StreamBase release numbers. This link opens a Windows Command Prompt session with its environment pre-set for running StreamBase command-line utilities. On 64-bit StreamBase installations, the installer also provides a link named StreamBase n.m 64-bit Command Prompt.

Environment Settings

The following list summarizes the Windows environment settings set in the StreamBase Command Prompt in recent releases of StreamBase.

STREAMBASE_HOME: Absolute path to the top-level StreamBase installation directory.
PATH: For 32-bit StreamBase Command Prompts, prepends %STREAMBASE_HOME%\jdk\bin and %STREAMBASE_HOME%\bin to the PATH. For 64-bit StreamBase Command Prompts, prepends %STREAMBASE_HOME%\jdk64\bin, %STREAMBASE_HOME%\bin64, and %STREAMBASE_HOME%\bin to the PATH.

CLASSPATH: Prepends %STREAMBASE_HOME%\lib\sbclient.jar to the CLASSPATH.
PYTHONPATH: Appends %STREAMBASE_HOME%\lib\python2.6 to the PYTHONPATH.
JAVA_HOME: If JAVA_HOME is not set, sets it to %STREAMBASE_HOME%\jdk or %STREAMBASE_HOME%\jdk64, as appropriate for the StreamBase Command Prompt version.

Opening a StreamBase Command Prompt

Open a StreamBase Command Prompt with the following command sequence:

Start > (All) Programs > StreamBase n.m > StreamBase n.m Command Prompt

You can also select the top level of a project folder in the Studio Package Explorer, right-click, and select Open StreamBase Command Prompt Here from the context menu. This opens a StreamBase Command Prompt whose current directory is the selected project's directory in the Studio workspace.

StreamBase 64-Bit Command Prompt

On supported 64-bit Windows platforms, you can install the StreamBase kit for 64-bit Windows, as described in Installing and Running StreamBase on 64-Bit Windows. In this case, the Start menu for your StreamBase release contains an entry for the StreamBase 64-bit Command Prompt, in addition to the regular StreamBase Command Prompt. The StreamBase 64-bit Command Prompt is a Windows command prompt with %STREAMBASE_HOME%\bin64 on the PATH in addition to %STREAMBASE_HOME%\bin. When you run StreamBase Server from the StreamBase 64-bit Command Prompt, you invoke the 64-bit version of the server.

On 64-bit Windows, if you are configuring your Windows environment with the sb-config command, or you are installing a Windows service with sbd --install-service, then be sure to use the appropriate version of the StreamBase Command Prompt. The 64-bit Command Prompt installs the 64-bit server and configures for the 64-bit environment. The standard StreamBase Command Prompt installs and configures for the 32-bit environment.

Setting the Environment Globally

If you have only one version of StreamBase installed, you might prefer to have the StreamBase environment set globally, instead of set only in a StreamBase Command Prompt. In this configuration, you could use any Windows Command Prompt session to run StreamBase utilities, and would not be restricted to using the StreamBase Command Prompt. This configuration might be appropriate for a deployment system running a production StreamBase application.

Important: If you have more than one StreamBase installation on the same Windows machine, do not use sb-config to modify your global environment, or use it with caution, as discussed below in More Than One StreamBase Installation.
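Whether or not you set the environment globally, a quick way to confirm which installation's settings a given StreamBase Command Prompt has picked up is to echo the key variable and ask sb-config for its version. A minimal check (the output is whatever your installation reports):

echo %STREAMBASE_HOME%
sb-config --version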

Use the sb-config utility to add StreamBase environment variables to your global environment, as described in the reference page sb-config. Use the --setenv option to modify the currently logged-in Windows user's environment. Use the --setsysenv option to modify the environment for all users.

More Than One StreamBase Installation

If you have more than one StreamBase installation on the same Windows machine, the use of sb-config and the StreamBase Command Prompt change accordingly. There are two cases, described in the following sections.

Multiple Current StreamBase Installations

Follow these rules if you have, on the same Windows machine, two or more recent StreamBase installations, such as one 6.4 and one 6.5 installation:

1. Use the StreamBase Command Prompt link in the Start menu of a particular 6.x installation to run StreamBase utilities from that installation. The following example command sequences open two separate StreamBase Command Prompts, each set with the environment from its StreamBase installation.

Start > (All) Programs > StreamBase 6.4 > StreamBase 6.4 Command Prompt
Start > (All) Programs > StreamBase 6.5 > StreamBase 6.5 Command Prompt

2. If you prefer to have a primary StreamBase release installed, and you want to set the global Windows environment for that primary release, then use the sb-config utility with caution.

Caution: Be sure to run the correct sb-config utility when using the --setenv and --setsysenv options. You have one sb-config utility for each installation. Before running sb-config, open a StreamBase Command Prompt for the installation you want to become the primary system default, and verify that you have the intended release with the following command:

sb-config --version

If you use sb-config --setsysenv or --setenv to set the global environment for a primary StreamBase release, then thereafter, that release's utilities are available in any standard Windows command session. To use the StreamBase commands for a non-primary installation, you must explicitly open a StreamBase Command Prompt for the non-primary release.

Mixed Old and New StreamBase Installations

Follow these rules if you have, on the same Windows machine, one StreamBase 3.x installation and one or more StreamBase 5.x or 6.x installations:

1. Do not use the 5.x or 6.x sb-config utility to modify your machine's global environment. This would overwrite the StreamBase 3.x global environment settings. Once overwritten, they can only be restored by manual edits or by reinstalling StreamBase 3.x.

2. Use any standard Windows Command Prompt to run utilities from your StreamBase 3.x installation.

3. Use the StreamBase Command Prompt in your StreamBase 5.x or 6.x installation to run utilities from that installation.

Caution: With mixed 3.x and 5.0 or later installations, the StreamBase 3.x bin directory is still in the PATH, after the 5.x or 6.x bin directory. This means utilities from 3.x that do not have matching files in 5.x or 6.x can still be inadvertently run at the 5.x or 6.x StreamBase Command Prompt.

Running As Administrator on Windows Vista and Windows 7

For running most utilities and samples on Windows Vista and Windows 7, you can use the normal StreamBase Command Prompt as installed in the Start menu. However, with User Account Control (UAC) enabled, certain StreamBase commands will fail silently. These include commands that attempt to modify the global environment (sb-config --setenv and --setsysenv), or that attempt to write to the Windows registry (sbd --install-service and --remove-service). To run such commands on Vista or Windows 7 with UAC enabled, you must run the StreamBase Command Prompt with administrator privileges. Follow these steps:

1. Click the Windows orb in the lower left corner of the Taskbar.
2. Click All Programs.
3. Open the StreamBase n.m folder, where n.m represents your release number.
4. Right-click the StreamBase Command Prompt entry and select Run as administrator from the context menu.
5. Click Continue in the resulting User Account Control dialog.

A StreamBase Command Prompt opened with administrator privileges shows the word Administrator in the title bar, and opens by default in the Windows\system32 directory instead of the StreamBase installation directory. To navigate to the StreamBase installation directory, use the following command:

cd %STREAMBASE_HOME%
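Putting these pieces together: on a machine with a single StreamBase installation, setting the environment globally from an administrator StreamBase Command Prompt might look like the following sketch. The sb-config options are the ones described above under Setting the Environment Globally; the administrator prompt is needed because --setenv modifies the global environment, which fails silently under UAC otherwise.

cd %STREAMBASE_HOME%
sb-config --version
sb-config --setenv

Substitute --setsysenv for --setenv if the environment should apply to all Windows users rather than only the currently logged-in user.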

Running StreamBase Applications

This section describes how to run StreamBase applications and deployment files in StreamBase Studio and from the command prompt.

Contents: Overview of Running Applications, Running Applications in Studio, Editing Launch Configurations, Running Applications from the Command Line, Running Multiple StreamBase Applications

Overview of Running Applications

Contents: What Running an Application Means, Local and Remote StreamBase Servers, Run History, Running Compared to Debugging, Starting and Stopping Applications, See Also

The topics in this section describe the ways to run and stop a StreamBase EventFlow or StreamSQL application, both within and outside of StreamBase Studio.

What Running an Application Means

To run a StreamBase application means to send an application to an instance of StreamBase Server, which executes the application code you send, and opens any input and output ports specified in the application's design. You can start StreamBase Server from within StreamBase Studio, or you can run it from the command line with the sbd command.

StreamBase Server accepts applications in one of five forms:

An EventFlow application: EventFlow source code (.sbapp). Runs from Studio or sbd.
A StreamSQL application: StreamSQL source code (.ssql). Runs from Studio or sbd.
A deployment file: an XML configuration file that designates one or more EventFlow or StreamSQL files to run (.sbdeploy). Runs from Studio or sbd.
A precompiled application archive file: an archive file containing precompiled and runnable binary code (.sbar). Runs from sbd, or in Studio automatically (not on demand).
An application bundle: an archive file containing all source files, resource files, and configuration files necessary to run an application on any StreamBase Server host (.sbbundle). Runs from sbd only.

Applications in EventFlow or StreamSQL format are applications in source code form. In these cases, the server accepts the application code, compiles it into runtime format, then runs the compiled application.

StreamBase deployment files are XML configuration files that designate an application to run in the container named default, optionally including other application modules to run in other containers. When you run a deployment file, StreamBase actually runs the EventFlow or StreamSQL module specified in the deployment file. See Deployment File Overview.

Precompiled application archive files are discussed in Precompiled Application Archives. Application bundle files and the bundling process are discussed in Application Bundling.

Local and Remote StreamBase Servers

From the point of view of StreamBase Studio, an instance of StreamBase Server is local or remote:

A local StreamBase Server runs on the same machine currently hosting StreamBase Studio.
A remote StreamBase Server runs on a supported UNIX machine independent of the machine running StreamBase Studio.

StreamBase Studio can launch both local and remote instances of StreamBase Server.

Running Locally

When Studio starts StreamBase Server locally, it takes the following steps:

Studio launches %STREAMBASE_HOME%\bin\sbd.exe, passing it the name of the top-level application file to run. On UNIX, Studio launches $STREAMBASE_HOME/bin/sbd. On 64-bit Windows, Studio launches %STREAMBASE_HOME%\bin64\sbd.exe.

If a server configuration file named sbd.sbconf is present at the root of the Studio project folder, Studio parses it, reconciles it with its internal typecheck environment, generates a one-time, temporary configuration file, and passes the temporary file to sbd. See How Studio Uses Server Configuration Files for details.

Running Remotely

Before attempting to launch a Server instance remotely from Studio, set up and test SSH connectivity separately, outside of Studio. You can use the ssh utility on UNIX, or a public domain program such as PuTTY on Windows. SSH connections from Studio to a remote machine work with both password and keyboard-interactive authentication methods, but that authentication must be configured and tested before attempting to use it in Studio.

When Studio starts StreamBase Server remotely, it attempts to make an SSH connection to the specified server, using a specified user name. You can specify the server and user name globally for all remote Studio launches in Studio Preferences (Window > Preferences, then StreamBase Studio > Launching), or you can specify the server and user name separately for each project in an application-specific launch configuration, described in Editing Launch Configurations.

Once Studio successfully logs into the remote server over an SSH connection, it copies the specified EventFlow or StreamSQL application files and its generated configuration file to the remote host, then attempts to run /usr/bin/sbd with the appropriate command line arguments.

Note: Because remote Studio launches require an SSH connection, StreamBase Studio does not support remotely launching StreamBase Server on Windows hosts. StreamBase Server does run on supported Windows platforms, when launched with sbd locally on the Windows host itself, or when launched locally from Studio running on that host. The only limitation is the inability to launch Server remotely from Studio on a Windows machine to Server on a remote Windows machine.

Run History

StreamBase Studio keeps a history of each application run, debugged, or traced. When you run, debug, or trace an application using a launch configuration, an entry for that launch configuration is placed at the top of the history list. You can quickly re-run an application by invoking its launch configuration in the Run History list. By contrast, when you run, debug, or trace an application using Run As, Debug As, or Trace As from the Run menu, no entry is placed in the history list.

Studio shows separate Run History, Debug History, and Trace History entries in the Run menu, but Studio only preserves one history list. The same launch configuration name is placed at the top of all three history lists when run from a launch configuration.

Running Compared to Debugging

There are similarities and differences between running StreamBase applications and running them in debug mode.

Running and debugging have the following features in common:

In Studio, you can specify and save both run configurations and debug configurations. In both cases, a saved configuration lets you re-run or re-debug with the same runtime parameters.
Studio provides a standard run configuration that is automatically used if you don't specify your own launch configuration. Studio uses this default configuration the first time you run, debug, or trace an application.
You can enqueue test data or live data in the same ways to a running application and to one being debugged.

Running and debugging have the following differences:

A running application stays running until you stop it, whether or not it is receiving input data. If the flow of enqueued data stops, the application continues running.
An application run in debug mode honors breakpoints set in your application, and runs until it reaches the first breakpoint. Thereafter, you can step through the application one instruction at a time.

Starting and Stopping Applications

The following topics provide the details of running and stopping StreamBase applications.

See Running Applications in Studio to learn about running and stopping an application in StreamBase Studio, using the default run configuration.
See Editing Launch Configurations to learn about creating and saving custom run configurations for your StreamBase applications.
See Running Applications from the Command Line to learn about running and stopping applications at the Windows or UNIX command prompt.
See Running Multiple StreamBase Applications to learn about running more than one StreamBase application at the same time.

See Also

See Debugging Overview to learn about debugging instead of running your application.

Running Applications in Studio

Contents: Before Running an Application, How to Tell Which Application is Running, Launch Configurations, Running an Application with Default Configuration, Running an Application with a Custom Configuration, Prompts for Unsaved Changes, Stopping a Running Application

This topic describes how to run and stop a StreamBase EventFlow or StreamSQL application or a StreamBase deployment file from StreamBase Studio. To run your application with non-default settings, see Editing Launch Configurations. To debug your application, see Debugging Overview.

Before Running an Application

A StreamBase application can only be run when it is:

Saved to disk
Free of typecheck errors

See Typechecking to learn how to recognize and correct typecheck errors in your application.

How to Tell Which Application is Running

There are two ways to tell when an application is currently running:

Application tab in the Editor window: A small green triangle is overlaid on the icon in the running application's Editor tab. The triangle disappears when you stop the application.

A "currently running" message is shown in the information bar at the top of the Editor canvas of the running application, including the full StreamBase URI and container name. A message about "Another application" is shown in the information bar of other Editor sessions. Notice that the information bar can be collapsed to an icon by clicking the arrow on the right side. Click the icon's arrow to restore the information bar.

StreamBase icon in the lower left corner of the Studio window: When no application is running, the icon is not active. When an application is running, the icon acquires a small green triangle. To see the name and location of the running application and its server port, hover your mouse over the icon. When running a deployment file, the hover text shows the application specified to run in the default container.

Launch Configurations

Every run of a StreamBase application occurs in the context of a launch configuration that defines the parameters for the run. You can specify a custom set of runtime parameters and save them as a named launch configuration, as described in Editing Launch Configurations. StreamBase Studio also provides a default launch configuration, which allows you to run your application right away without stopping to edit a custom launch configuration.

Running an Application with Default Configuration

To run an application on the local StreamBase Server using the default run configuration, first make sure your application or deployment file is currently active in its Editor view, and is saved and free of typecheck errors. Then use any of these methods:

Click the Run button on the toolbar. This runs or re-runs the EventFlow, StreamSQL, or deployment file in the currently active Editor view.
Press Ctrl+F11.
Press Alt+Shift+X, B. That is, press and hold the Alt, Shift, and X keys at the same time, release them, and immediately press the B key.
Select Run > Run As > StreamBase Application from the top-level menu. (This option only appears in the Run menu if the currently selected Editor view is an EventFlow, StreamSQL, or deployment file editor.)
Right-click in the canvas of your application in its Editor view, and select Run As > StreamBase Application from the context menu.
Right-click the name of your top-level application's EventFlow, StreamSQL, or deployment file in the Package Explorer view, and select Run As > StreamBase Application from the context menu.
Click the down-arrow next to the toolbar's Run button, and select Run As > StreamBase Application from the menu.

When you run an application with the default configuration, no entry is placed in the Run > Run History list.

Running an Application with a Custom Configuration

Use the Run Configurations dialog to edit, name, and save a run configuration. See Editing Launch Configurations. To run an application with a custom configuration, invoke the configuration's name in one of the following ways:

Open the Run Configurations dialog as described in Open a Launch Configuration Dialog. Select the name of the launch configuration in the contents pane on the left, then click the Run button.
Select Run > Run History > your-config-name from the top-level menu.
Click the down-arrow next to the toolbar's Run button, and select your configuration's name from the menu.

Prompts for Unsaved Changes

If the specified application has any unsaved changes, StreamBase may display the Save and Launch dialog. The Save and Launch dialog offers to enable the Eclipse feature that automatically saves applications before running them. This dialog is displayed when the Auto-save option in your Studio preferences is set to Ask me next time (its default setting).

The dialog displays a list of the EventFlow, StreamSQL, or Deployment File Editor sessions for files associated with the application you are trying to run. The check boxes are pre-selected for files that need saving before the run request can proceed. You can clear the check boxes for support files such as feed simulation files, but you must allow the top-level EventFlow, StreamSQL, or deployment file to be saved before running it.

If you select the Always save resources before launching check box, Studio changes the Auto-save option in your workspace preferences to Save without prompting. The next time you run an application or deployment file with unsaved changes, it is automatically saved and run immediately.

Cancel dismisses the dialog and the application is not run.

By default, running an application causes StreamBase Studio to switch from the SB Authoring perspective to the SB Test/Debug perspective. (If you do not want this to happen automatically, you can change it in Studio Preference Settings.)

Stopping a Running Application

To stop a running application, follow these steps:

1. Select the Editor tab for the running application.
2. Stop the selected application with one of these actions:
   Press the F9 key.
   Select Stop StreamBase Application from Studio's Run menu.
   Click the Stop the Running StreamBase Application icon on the toolbar.

Stopping an application also stops any feed simulations that may be running to test the application.

By default, stopping an application causes Studio to switch back to the SB Authoring perspective. You can change this behavior in Studio Preference Settings.

Editing Launch Configurations

Contents: Introduction, Use Cases for Launch Configurations, Open a Launch Configuration Dialog, Create a New Launch Configuration, Edit an Existing Launch Configuration, The Main Tab, The Advanced Tab, The Containers Tab, The Source (Java) Tab, The Environment Tab, The Common Tab, Launch Configuration Perspective-Change Preferences

Introduction

This topic describes how to use the Run Configuration, Debug Configuration, and Trace Configuration dialogs. All three dialogs have the same usage and allow you to set the same settings.

StreamBase Studio automatically generates a launch configuration the first time you run, debug, or trace debug an EventFlow or StreamSQL module, or a deployment file. Studio creates a single named launch configuration for each application, whether for running, debugging, or tracing. You can copy the default-created launch configuration and rename it if you want to maintain separate run and debug launch configurations.

The Run Configuration, Debug Configuration, and Trace Configuration dialogs are nearly identical, and are treated on this page as if they are the same dialog. When referring generically to any of the three dialogs, this page uses the term launch configuration dialog.

Use Cases for Launch Configurations

You can save sets of launch configurations with different names, to be used in several ways:

You can save a single run-debug-trace configuration for each StreamBase application. (This is Studio's default.)
You can save separate run, debug, and trace configurations for each application, with different parameters for the three conditions.
You can save several run configurations for any one application, to run and test it with different runtime parameters. For example, you can quickly switch between testing the same application against both local and remote servers with separate configurations for each server.
You can specify a run configuration instead of a top-level application file when exporting a StreamBase bundle. See Application Bundling.

Open a Launch Configuration Dialog

Open the Run Configuration, Debug Configuration, and Trace Configuration dialogs using any of the following methods:

Select Run > Run Configurations from the top-level menu (or Run > Debug Configurations or Run > Trace Configurations).
Select Run Configurations from the drop-down arrow next to the Run button in the Studio toolbar (or next to the Debug or Trace buttons).
Right-click in the EventFlow Editor canvas for your application. From the context menu, select Run As > Run Configurations.
In the Package Explorer, right-click the name of your application's EventFlow or StreamSQL file. From the context menu, select Run As > Run Configurations.

Launch configurations are stored as metadata in your Studio workspace. Thus, by default, your set of named launch configurations is common to all your StreamBase projects. However, a launch configuration includes the pathname to the individual application to be launched, so you cannot share launch configurations among StreamBase applications.

Create a New Launch Configuration

Studio automatically creates a new launch configuration for you the first time you run, debug, or trace a module, with the configuration taking the name of the top-level application being run. These launch configurations are created with default settings, and you can edit or copy them to specify custom settings.

You can also create a new launch configuration from scratch. To do this, open the Run Configurations, Debug Configurations, or Trace Configurations dialog and do one of the following:

In the left pane, right-click StreamBase Application, and choose New in the context menu.
In the left pane, select StreamBase Application and click the New toolbar button.
Select the name of an existing StreamBase launch configuration and click the Duplicate toolbar button.

Edit an Existing Launch Configuration

To edit an existing launch configuration:

Open the Run Configurations, Debug Configurations, or Trace Configurations dialog.
Select the name of the configuration to edit.
Make changes in the tabs of the wizard and click the Apply button before you leave each tab. Click Revert to undo changes since the last saved version.

After applying your edits, you can:

Click Close to preserve your edits and exit the dialog.
In the Run Configurations dialog, click Run to run the edited launch configuration.
In the Debug Configurations dialog, click Debug to run and debug the edited launch configuration.
In the Trace Configurations dialog, click Trace to run the edited launch configuration, generate a tuple trace file, then automatically open the SB Trace Debugger perspective.

The Main Tab

Name: A configuration name is generated for you. Change the name according to your site's standards. The name you enter is not reflected in the navigator pane on the left until you click Apply.

Application or Deployment File: Specify the path to the StreamBase EventFlow or StreamSQL application file, or StreamBase deployment file, to be run, debugged, or traced. Paths are measured from the root of the current Studio workspace. The easiest way to enter the correct path format is to browse for an application file using the Browse button. The resulting dialog shows a list of all EventFlow, StreamSQL, and deployment files in your workspace. Select the one you want and click OK.

Target Server: Choose one of the following options:

Local: Launches StreamBase Server on this machine.

Use 32-bit StreamBase Server instead of 64-bit version: This option only appears when StreamBase is installed with the 64-bit version of the StreamBase installer on 64-bit Windows, and Studio detects the bin64 folder in %STREAMBASE_HOME%. Under those circumstances, Studio automatically launches the 64-bit version of the server. Select this check box to override that default and use the 32-bit server. This option is dimmed if you select either Remote option.

Remote using workspace defaults: Launches StreamBase Server on a separate UNIX machine using default server settings. If you previously defined a default server machine and username in the Studio preferences dialog, those defaults are automatically used. See Launching Panel for those settings.

Remote using the following target server: Launches StreamBase Server on a separate UNIX machine with settings that you specify for this launcher only (any workspace defaults are ignored). Fill in the next two fields:

Server machine: Enter the name of the remote host on which to invoke StreamBase Server. The remote host name must be a valid DNS name, IP address, or the string localhost, and can optionally be followed by a colon and the port number of the SSH server on that host. Studio uses the standard SSH port 22 if you do not specify a port. The remote server must be accessible from this machine using SSH. For example, enter: wayfast.example.com:22

Username: Enter a valid SSH username on the specified remote host. The username must already be set up and tested for SSH connectivity to the specified host. StreamBase supports both password and keyboard-interactive authentication methods with the remote SSH server.

Any remote server designation here is ignored when making an executable bundle with the sbbundle command, or with the StreamBase Export dialog described in Application Bundling.

Advanced Server Options

Compile StreamBase application in separate process:

If StreamBase Server is unable to start and shows an error message about insufficient heap or memory, this option might help by using external processes for the compilation tasks. This option primarily affects large application launches for running, debugging, or tracing on memory-constrained systems, such as 32-bit Windows, but may improve launch times for other platforms as well. Setting this option is the same as setting the streambase.codegen.generate-bytecodes system property and its related system properties, as described in StreamBase Java System Properties.

The Advanced Tab

Debugging and Tracing Options

The check box for the Enable Intermediate Stream Dequeue field allows you to dequeue from intermediate streams while your application is running, for temporary debugging purposes only. You can optionally specify a regular expression to match against the names of intermediate streams in the top-level module and all sub-modules. Intermediate streams whose name matches the provided expression as a substring are exposed for dequeuing; all other intermediate streams are not. See Intermediate Stream Dequeuing for more information.

The check box for the Filter trace data using this regular expression when tracing field allows you to specify a regular expression that narrows the tuples stored in the trace output. Specify a pattern that matches against a component's full module path. For example, enter an expression that specifies a single output stream: ".*TradesOutputStream", or one that specifies a particular module: ".*ModuleName.*". You can specify more than one match string in the pattern: ".*ModuleRef1.*.*Quotes.*"

The Allow Remote Java Debugging check box is for debugging embedded Java modules you have added to your StreamBase application. Specify the port number to which a remote Java debugger will connect to your Java module. For example, you might be using Eclipse's Remote Java Application debug launch configuration to debug embedded Java code live, running inside a StreamBase Server instance that was launched with this option checked.

Server Logging Level

Select Current Studio logging level to use the logging level that Studio knows about at launch time. If you start Studio with a higher log level, launch configurations that use this setting will use the higher logging level. Select Specified to override the global logging level. The higher the log level number you specify, the more messages and message types are sent to the Console view and/or to an output log file, if this launch configuration specifies one.

Persistent Data Options

This option specifies a directory to be used for persistent data in disk-based Query Tables (available only in StreamBase Enterprise Edition). This is the same option specified as --datadir when running sbd from the command line. Use the default or specify an existing directory, accessible to and writable by StreamBase Server.

Working Directory

Use this option to change the working directory for a local launch. Any files output by your running application (for example, generated CSV files) are created relative to the launch-time working directory. By default, the working directory is the folder that contains the application file you are launching. Click Specified to enter a different working directory. This option is ignored for remote target servers.

The Containers Tab

The Containers tab is active if you specify running an EventFlow or StreamSQL application in the Main tab. If you specify a StreamBase deployment file in the Main tab, the Containers tab is passive, and serves only to display the containers, parameters, and container connections specified in the deployment file. All buttons in the tab are dimmed and inactive under these conditions; in this case, make any necessary container and parameter changes by editing and saving the deployment file.

Running Applications in Containers

Use the Containers tab to specify one or more containers to hold and run other StreamBase modules that you want to start at the same time as the primary application specified on the Main tab. The primary application is shown in bold in the Application grid. StreamBase containers are described in Container Overview. You can also specify containers, parameters, and container connections in a StreamBase deployment file, then specify that deployment file as the application to run on the Main tab.

With multiple applications running in containers, the names of streams in applications running in non-default containers are prefixed with the container name. Thus, sideplay.OutgoingBids refers to the output stream named OutgoingBids in the module running in the container named sideplay.

Most of the features of running an application in Studio apply equally to modules running in containers. This includes:

In the Manual Input view, you can send tuples manually to input streams in non-default containers.
The Application Input view shows tuples sent to all input streams, including those in non-default containers.
The Application Output view shows tuples from all output streams in all running containers.

However, running multiple containers in Studio has the following limitations:

The primary StreamBase application, the one specified on the Main tab, is always started in a container named default. You cannot change the name of the container for the primary application.
You cannot run separate feed simulations in Studio for the primary application and for applications running in containers. Only streams in the default container can be addressed.

Setting Module Parameters

Click the New Parameter button, and specify the Name and Value settings. The parameter name you specify must already be defined with the Parameters tab of the EventFlow Editor for the module specified to run in the selected container. See Using Module Parameters for more on module parameters.

Making Container Connections

Container connections are global for the Studio project, much like connections specified in the <container-connections> element of a StreamBase deployment file. Use the Container Connections grid to specify one or more connections between containers. Container connections are described in Container Connections.

You will typically use this feature to specify a stream-to-stream connection, where an input stream in one application and container receives tuples from an output stream in another application and container. However, you can specify any container connection type in the Containers tab that you can specify in a deployment file or interactively with the sbadmin command, as described in Container Connections.

Click the New Connection button, and specify the Source and Destination streams, using stream names qualified with the container name (for example, sideplay.OutgoingBids as a Source). You can use Ctrl+Space in the Source and Destination fields to invoke auto-completion to show the list of streams in the applications listed in the Containers and Parameters grid.

Select an existing container connection line to activate the Edit, Duplicate, and Remove buttons. To create a container connection that is very similar to an existing connection, use the Duplicate button to copy the existing connection, then select the new connection and use the Edit button to modify the new connection.

Container Start Order

Use the Move Up and Move Down buttons to specify container start order. Studio starts containers in the order listed in the top grid in the Containers tab. For some container connections to work, you must specify a non-default container start order. For example, if you specify a container connection that causes the primary application, A, to read from a stream in another container, B, then the module in B must be started before A.

It is possible to specify container connections in the application itself, using the Container connection field in the Advanced tab of the Properties view for input and output streams. In this case, you may need to adjust the container start order to allow the container with the outgoing stream to start before the container with the incoming stream.

The Source (Java) Tab

Only Debug Configuration dialogs include the Source (Java) tab, which defines the location of source files used to display source when debugging Java code as part of your StreamBase application. By default, these settings are derived from the current project's Java Build Path. You can override the default settings in this tab.

The Environment Tab

The settings in the Environment tab are common to all Eclipse launch configurations. Use Eclipse Help to learn more about these options.

Use the Environment tab to specify or override environment variables with which to launch the application specified in the Main tab. You can define new, application-specific environment variables with the New button, or you can override or append to existing system environment variables. Use the Select button to show a list of the system's currently set environment variables, from which you can select one or more. Once a system variable is in the table, use the Edit button to change or append to its value. You cannot replace or override the STREAMBASE_HOME, PATH, STREAMBASE_JVM_ARGS, or STREAMBASE_LOG_LEVEL variables.

The New/Edit Environment Variable dialog contains a Variables button, which is a feature inherited from Eclipse. Use this button to specify the value of an environment variable using one or more Eclipse system variables whose contents are resolved by Eclipse when this configuration is run. Use the Help button on this dialog for further information on each available variable. Starting with release 7.2.6, the list of variables includes ${sb_home}, which is defined as the absolute file system path of the StreamBase installation directory as currently set in Studio's Preferences dialog.

The Common Tab

The settings in the Common tab are common to all Eclipse launch configurations. Use Eclipse Help to learn more about these options.

Save as: This option determines where this launch configuration is saved. The default setting, Local file, saves configurations in the workspace metadata area. As an alternative, you can select Shared file to save this launch configuration as a file in one of your Project folders. In this case, the launch configuration is saved as Name.launch, where Name is specified in the Name field. Specify the target Project folder using the Browse button.

Display in favorites menu: The favorites menu is a list of launch configuration names that appears in the second section of the drop-down menu of the Run and Debug buttons on the toolbar. The first section contains a changing list of recently run launch configurations. Add a configuration to the favorites section to keep it in the menu, even when other applications have replaced it in the recently-run section. This option performs the same function as the Organize Favorites option in the Run and Debug drop-down menus.

Console Encoding: Use this section to specify a non-default character encoding for log messages sent to the Console view (and optionally to a standard output file).

Standard Input and Output:

Allocate Console is selected by default. This specifies using the Console view to display logging messages written to standard output and standard error, and to accept text input, if your application requires it. You can disable this option for launch configurations that will run on remote StreamBase Servers.

File is unselected by default. To specify that standard output and standard error of your application's run are written to a log file, select the File check box, and enter a path to an existing file. If both Allocate Console and File are selected, log messages are written to both places.

You must designate the path to the file using an Eclipse-specific syntax. Rather than typing a path, use the Browse Workspace or Browse File System buttons to navigate to an existing file, and let Eclipse write the syntax for you. To navigate to a file in the workspace, you must have created it ahead of time with Studio's File > New > File menu option. If you are familiar with the Eclipse syntax required in this field, you can use the Variables button to build up a path from components.

Append is dimmed unless File is checked. Check the Append check box to have log messages appended to the file specified in the File field. By default, the file is overwritten with each run of this launch configuration.

Launch in background: Use this option to specify that the launching process itself runs in the background. (StreamBase Server is always run in the background.) Checking this box returns control of Studio immediately, without displaying the usual Starting Server dialogs, after you click Run or use the toolbar Run button later to launch this configuration.

Launch Configuration Perspective-Change Preferences

By default, running or debugging a StreamBase application with a launch configuration automatically switches StreamBase Studio to the SB Test/Debug perspective, after prompting you to confirm. You can specify on the prompt dialog to not ask again. The option to switch perspectives is not stored in the launch configuration itself, but is a global setting for all StreamBase applications, as specified in Studio Preference Settings.

Running Applications from the Command Line

Contents: Starting Applications at the Command Prompt, Stopping Applications at the Command Prompt

Starting Applications at the Command Prompt

Start StreamBase Server at the command prompt with a command like the following:

sbd [options] application-file

where application-file is the path to a StreamBase file in one of the following formats:

EventFlow application file (.sbapp)
StreamSQL application file (.ssql)
StreamBase deployment file (.sbdeploy)
Precompiled archive file (.sbar)
Application bundle file (.sbbundle)

See the sbd command reference topic for information on the sbd command options. In a UNIX terminal window, you can enter man sbd to view the reference page for sbd. Use the sbadmin or sbc command to send requests and control commands to a running sbd process.

Stopping Applications at the Command Prompt

Stop a running sbd process, and the StreamBase application it is hosting, by entering the following command from a command prompt:

sbadmin shutdown

The -u option lets you specify a URI for a running sbd process that uses a non-default port, or that is running on a remote server. See the sbadmin command reference topic for details.
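As a worked example, the following sequence starts a server hosting a hypothetical MyApp.sbapp, then shuts it down (run the shutdown commands from a second command prompt if sbd is occupying the first). The second shutdown form shows the -u option; the host name and port shown are placeholders, and the exact URI syntax is covered in the sbadmin and sburi reference topics.

sbd MyApp.sbapp
sbadmin shutdown
sbadmin -u sb://somehost:9900 shutdown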


Running Multiple StreamBase Applications

You can run one or more StreamBase Servers (sbd processes) on the same machine. Each sbd process must be configured to use a unique port number. The port number only needs to be unique per machine, so that each server can be uniquely addressed with a URI such as sb://machinename:port.

StreamBase Studio can run only one top-level application at a time, although it can also run multiple application modules hosted in containers. However, on the machine where you installed StreamBase, you can run any number of separate sbd processes from the command line on separate ports, each of which hosts a separate StreamBase application.

There is no hard-coded limit on the number of sbd processes that can run at the same time on a given machine. The practical limit is constrained by the system resources available on that machine, including memory, address space, and number of processors.

The following diagram illustrates the use of multiple sbd processes in different combinations.
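For instance, two independent servers on one machine might be started from two separate StreamBase Command Prompts, each hosting its own application on its own port. This is a sketch only: the application names are placeholders, and it assumes the sbd port option (-p) and the client-side -u URI option described in the sbd and sbc reference topics; check those pages for the exact option names and the default port for your release.

sbd -p 10000 OrderRouter.sbapp
sbd -p 10010 MarketFeed.sbapp
sbc -u sb://localhost:10010 list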


Attaching to Running StreamBase Applications

This section describes how to attach to running StreamBase applications from Studio and from the EventFlow Debugger.

Contents: Attaching to a Running Application, Attaching to a Running Server in Debug Mode

Attaching to a Running Application

StreamBase Studio supports attaching to a running remote or local StreamBase Server instance to analyze applications on that server. Once connected, you can use most features of the SB Test/Debug perspective on the running application, including:

Manual Input view
Feed Simulations view
Application Input view
Application Output view
Recordings view
Profiler view

This feature is implemented as a launch configuration type, Attach to StreamBase Server. You must create and configure a launch configuration of this type to attach to a running server. Follow these steps:

1. Invoke Run > Run Configurations to open the Run Configurations dialog. (The Attach to StreamBase Server configuration type does not appear when you open the Debug Configurations or Trace Configurations dialogs.)

2. In the left pane, select Attach to StreamBase Server and click the New button. This creates a new, empty launch configuration of the selected type.

3. In the Main tab:

   a. In the Name field, replace New_configuration with a name that reflects the hostname to which you will attach.

   b. In the Target StreamBase Server URI field, enter a URI in the format described in the sburi reference page. The URI must be a valid StreamBase URI for a host with a running application. You can use the sbc -u sburi list command at the command prompt to test connectivity in advance. You can specify a StreamBase Server running on a Windows or UNIX machine. The remote or local host does not need to be configured with SSH access control (as is required when Studio launches an application on a remote server). Attaching to a running application requires only client access to the StreamBase URI. The URI does not need to be on a remote host: you can specify a StreamBase Server instance running on the same machine, perhaps at an alternate port, using a URI such as sb://localhost:9999

   c. In the Application in default container field, specify or browse for a local copy of the application running on the remote server. This does not need to be an exact match; for example, your local copy might be an earlier or newer version of the same application.

4. Click Apply to save the configuration.

5. Click Run to run the configuration and attach to the specified application. Studio connects to the remote application and announces success in the information bar in the EventFlow Editor canvas. You can confirm connection to the remote server by hovering the mouse over the StreamBase status icon in the lower left corner of the Studio window.

6. To disconnect, press F9 or click the Stop Running Application button in the Studio toolbar.
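To verify in advance that Studio can reach the target server, the sbc list command mentioned in step 3b can be run against the same URI from any StreamBase Command Prompt. A sketch with a placeholder host and port:

sbc -u sb://remotehost:10000 list

If the command responds rather than reporting a connection error, the attach configuration should be able to connect to the same URI.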

If the attached remote application is connected to a live data input stream or is running a feed simulation, the Application Output view shows the emitted tuples as soon as you make the connection.

Once connected, you can:

Send individual tuples to one or more streams in the Manual Input view, even if the input streams are already accepting input on the remote host.
Send tuples with a feed simulation.
Profile the running application's operators and queues in the Profiler view.

In general, use the facilities of the SB Test/Debug perspective to examine the running application. To re-run the remote connection, select the name of the configuration in the Run or Run History lists of recently run configurations.

Attaching to a Running Server in Debug Mode

StreamBase Studio supports attaching to a remote or local instance of StreamBase Server running in debug mode, and using the EventFlow Debugger to debug both StreamBase and custom Java code. Follow these steps:

1. Run your top-level module from the command line on your remote server, using the --debug option of the sbd command. If your application includes custom Java code, provide a server configuration file for the command-line launch that specifies a <java-vm/dir> path to the java-bin directory of your Studio project. For example:

sbd -f sbd.sbconf --debug suspend=y myapp.sbapp

2. Use Run > Debug Configurations to open the Debug Configurations dialog.

3. Double-click Attach to StreamBase Server in the left column to create a new attachment configuration.

4. Fill in the StreamBase URI of the host on which you ran the sbd command in step 1.

5. Use the Browse button to locate the local copy of the application running on the remote server.

6. Use the default debug port, 8000, unless you specified a different one with the --debug option.

7. Click Debug.

8. You can now send a tuple and step through its progress in the EventFlow Debugger in the same way as with a local application launched by the Debugger in Studio.

Using Feed Simulations

This section describes how to run StreamBase feed simulations in StreamBase Studio and at the command prompt to send test data to your applications.

Contents: Feed Simulation Overview, Running Feed Simulations, Manual Input of Data, Using the Feed Simulation Editor, Feed Simulation with a JDBC Data Source, Feed Simulation Timestamp Options, Feed Simulation with Custom File Reader, Map to Sub-Fields Option, Command Line Feed Simulations

Feed Simulation Overview

Contents

- Feed Simulation Characteristics
- Feed Simulation Topics

You can test your StreamBase application by running it, then sending it a collection of test data in the exact format expected by your application's input streams. A collection of such test data is called a feed simulation.

Feed Simulation Characteristics

Feed simulations have the following characteristics:

- Feed data can be generated automatically from random data, each field according to its expected data type.
- You can customize your feed data to contain actual sample data, or realistic data of the exact format expected by your input streams.
- Your feed data can be formed from a mix of automatically generated and customized data.
- You can have feed data read from a comma-separated or tab-delimited file, or from a table in a JDBC database.
- You can provide Java code for a custom file reader for non-standard, proprietary, or binary files to serve as the source of feed data.
- You can ramp up the data rate of your simulated feed to stress-test your application.
- You can save feed simulations as disk files to be shared among developers or used in different projects. You can edit saved feed simulation files.

You can run feed simulations in several ways:

- In StreamBase Studio, you can send data manually, one tuple at a time.
- In Studio and on the command line, you can run a generic feed simulation that generates random data.
- In Studio and on the command line, you can run a saved feed simulation file.
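Because a feed simulation data file is ordinary delimited text, you can also produce one with a short program when you have no captured data on hand. The sketch below writes a small CSV file of random rows; the three-column layout (symbol, price, quantity) and the file name are assumptions chosen for illustration, not a schema defined in this guide.

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Random;

public class RandomCsvGenerator {
    public static void main(String[] args) throws IOException {
        Random random = new Random();
        String[] symbols = {"IBM", "GE", "MSFT", "ORCL"};
        try (PrintWriter out = new PrintWriter("simulated-trades.csv")) {
            for (int i = 0; i < 100; i++) {
                String symbol = symbols[random.nextInt(symbols.length)];
                double price = 10.0 + random.nextDouble() * 90.0;   // random double in [10, 100)
                int quantity = 100 * (1 + random.nextInt(10));      // 100..1000 in steps of 100
                out.printf("%s,%.2f,%d%n", symbol, price, quantity);
            }
        }
    }
}

You could then point a feed simulation at the resulting file with the Data File generation method described later in this guide.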

Feed Simulation Topics

The following topics provide the details of creating, maintaining, and running feed simulations for StreamBase applications.

- See Running Feed Simulations to learn how to run saved simulations in StreamBase Studio's Feed Simulation view.
- See Manual Input of Data to learn how to send data to a running application one tuple at a time.
- See Using the Feed Simulation Editor to learn how to define and save feed simulation files.
- See Feed Simulation with a JDBC Data Source to learn the details of using a SQL query to a JDBC database as the source of input tuples for a feed simulation.
- See Feed Simulation Timestamp Options to learn the details of using the Timestamp from column and Include in synchronized timestamp group options of the Feed Simulation Editor.
- See Feed Simulation with Custom File Reader to learn about writing and specifying custom Java code for reading non-standard, proprietary, or binary files as the source of a stream of input tuples for feed simulations.
- See Map to Sub-Fields Option to learn about mapping hierarchical data in CSV files.
- See Command Line Feed Simulations to learn about the command-line utility that runs saved feed simulations at the Windows or UNIX command prompt.

Running Feed Simulations

Contents

- Introduction
- Running Feed Simulations
- Status Indicators and Display Options
- Stopping Feed Simulations
- Related Topics

Introduction

Use the Feed Simulations view to send test data to a running StreamBase application. This view shows the available feed simulation files with the .sbfs extension in your Studio workspace. The Feed Simulations view allows you to run, pause, and stop a feed simulation, and to dynamically change the data rate.

A companion to the Feed Simulations view is the Feed Simulation Editor, where you can define new feed simulations or edit existing ones. Open the Feed Simulation Editor by double-clicking a feed simulation file in the Feed Simulations view. See Using the Feed Simulation Editor for more on editing a feed simulation's settings.

If your feed simulation takes input from a data file, you can either import an existing one into a project, or create and edit a new one in StreamBase Studio. To create a new one, click File > New > File. In the New File dialog, select the project containing the application, and specify a file name. A new, empty file is opened in a Text Editor, where you can edit and save it.

Running Feed Simulations

Remember that you must run your StreamBase application before you can use the Feed Simulations view. That is, a StreamBase Server process must be running and hosting the application.

In the SB Test/Debug perspective, click the Feed Simulations tab to bring it to the front. If you have many feed simulations in your workspace, you may see a progress bar in the Feed Simulations view as it loads. The StreamBase Server validates each feed simulation in the workspace against the input streams defined in the running application.

Feed simulations can be shared between applications and projects, and are not tied to a particular application. However, to successfully run a feed simulation, each stream name configured in the feed simulation file must match an input stream name in the running application.

The figure below shows a sample Feed Simulations view. The workspace in this example contains feed simulations for two projects: sample_operator and sample_bestbidsandasks. The red X icons indicate that those feed simulation files do not have streams compatible with the running application, as described in Status Indicators and Display Options. In this example, the running application is BestBidsAsks.sbapp, from the sample applications shipped with StreamBase.

To run the NYSE.sbfs feed simulation, select it and click the Run button. Another way to start it is to right-click the feed simulation file name and select Run Feed Simulation from the drop-down menu.

If this feed simulation is defined to log its data to the Application Input view, you see the inbound generated or trace data shown there while the feed simulation is running. In addition, when processing results occur on the dequeue streams, that data is shown in the Application Output view.

In the example above, the Feed Simulations view contains a Status column, showing the number of tuples sent to the application thus far (12,000 in this example). During the running of the feed simulation, you can increase the data rate with the slider in the Feed Simulations view.

Click the Stop button to shut down the feed simulation. The server process continues to run until you shut it down separately.

Status Indicators and Display Options

The Feed Simulations view uses icons to indicate whether the listed feed simulation files are compatible with the running application. A red or yellow icon does not indicate that the flagged feed simulation has an inherent problem, only that it is incompatible with the currently running application.

The red icon indicates that the simulation was designed for another application that uses different stream names. Select a feed simulation row to see message text below the grid explaining why that simulation cannot run with the current application.

The yellow icon indicates that the input stream name matches an input stream in the currently running application, but one or more fields in the schema are mismatched or have invalid data. The yellow icon error shown in the example above can happen when you have multiple feed simulations in your workspace, and there is a common stream name (in this example, NYSE_Feed), but the schemas are different. That is, the running application has a schema that matches the schema defined in the NYSE.sbfs configuration, but does not match the schema in the NYSE2.sbfs configuration.

The Run button is disabled when a red icon feed simulation is selected. It is not disabled for yellow icon simulations. That is, it is possible to run a feed simulation with a yellow warning icon, but it is not recommended. Best practice is to run only feed simulations that have no warning icons.

You can temporarily hide feed simulations that have red error icons. In the upper-right corner of the Feed Simulations view, click the down-pointing triangle button and select Hide Feed Simulations with errors. The resulting display omits the feed simulations flagged with red icons.

The Hide Feed Simulations control is a toggle: select it again to restore the full Feed Simulations view.

Stopping Feed Simulations

To stop a running feed simulation, click the Stop button in the Feed Simulations view. The running application continues to run. If you stop the running application, any running feed simulations also stop.

When you re-run a stopped feed simulation, StreamBase Studio starts the feed simulation over from the beginning. There is no mechanism to pause and restart a running simulation.

Related Topics

- sbfeedsim
- Using the Feed Simulation Editor
- Studio Preference Settings

Manual Input of Data

Contents

- Overview
- Using the Manual Input View
- Manual Input Error Detection
- Manual Input of List Fields
- Manual Input of Tuple Fields

Overview

Use the Manual Input view to send test data to a running StreamBase application one tuple at a time, and watch the results in the Application Output view and the Application Input view. The command-line equivalent of manual input is to start an sbc enqueue session on an input stream of a running application. You can then type the CSV equivalent of tuples that match that stream's schema.

When your application is running, StreamBase Studio queries the running server instance to determine the number of input streams and the schema for each stream. The Manual Input view shows the application's input streams in the Input stream drop-down list, and shows the schema for the currently selected stream.

Using the Manual Input View

If the running application has more than one input stream, select the stream of interest in the Input stream drop-down list. If an input stream's schema has field-level descriptions, the description text for a field can be seen as a tooltip when the cursor is in or over that field.

Similarly, if the schema for the stream itself has an entry in its Schema Description field, or if there is a description for a named schema used by the stream, hovering over the information icon for the stream shows that description.

The example below is taken from the firstapp.sbapp application shipped with StreamBase in the firstapp sample. The input stream TradesIn is selected as the stream onto which data will be enqueued. This input stream's schema has two fields, symbol and quantity.

When you run an application in Studio, the initial values in the Manual Input view's input fields are shown as null, which is a reserved word representing an empty value for the field. This default setting is a convenience when enqueuing a tuple that contains many fields: you can let some fields remain as null, and enter non-null values only for the fields that you know must not be null during your test of downstream components. (For further information on using nulls, see Using Nulls in the Authoring Guide.)

Notice the select link on the right of the Stream field. When working with large applications with many input streams in several modules, click this link to open a dialog that shows all input streams in the current project's module search path.
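As noted in the Overview, the command-line counterpart of this view is an sbc enqueue session that accepts the CSV form of each tuple. The sketch below only assembles such a CSV line for the two-field TradesIn schema (symbol, quantity); it calls no StreamBase API, and the convention of writing null as the literal word null is an assumption you should adjust to your enqueue session's settings.

public class TupleCsvLine {
    // Builds the comma-separated form of a (symbol, quantity) tuple.
    // A null value is written as the literal word "null" here; adjust to
    // whatever null string your enqueue session expects.
    static String toCsv(String symbol, Integer quantity) {
        String symbolField = (symbol == null) ? "null" : symbol;
        String quantityField = (quantity == null) ? "null" : quantity.toString();
        return symbolField + "," + quantityField;
    }

    public static void main(String[] args) {
        System.out.println(toCsv("IBM", 100));   // IBM,100
        System.out.println(toCsv("GE", null));   // GE,null
    }
}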

The following example shows values entered to replace the default nulls.

You can enter data for any valid StreamBase data type. See StreamBase Data Types for the range of values allowed for each data type. The following data types have specialized input methods:

- To enter a timestamp value, press Ctrl+Space with the cursor in a timestamp field (illustrated by the red line in the following example). Studio prompts with an autocompletion message as shown below. Select the message and press Enter to fill the field with the current date and time formatted as a timestamp string. You can edit the string to specify another date or time. Studio calculates the current time value once per application run, the first time you invoke this feature. If you have other timestamp fields in the same Manual Input view, Studio inserts the same timestamp.
- To enter data in a blob field, the view provides a free-form text field where you can enter multiple lines of text to represent the blob data.
- To enter data in a field of type list, see Manual Input of List Fields.
- To enter data in a field of type tuple, see Manual Input of Tuple Fields.

The text in parentheses to the right of each field reminds you of the field's data type. When you enter valid values for each of the fields in the tuple, Studio enables the Send Data button. When clicked, a tuple with the specified values is enqueued to the selected input stream of the running application. Depending on the application design, sending the data may or may not produce a result in the Application Output view.

Click the Send non-string blank fields as null button to automatically send nulls for fields you do not enter, except string fields. The next time you run or debug an application, this button's on or off state remains as last selected.

By default, each tuple you send is logged to the Application Input view. You can disable this by unchecking the Log to Application Input check box.
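If you prefer to prepare a timestamp string outside Studio and paste it into a timestamp field, a small helper can format the current time. The pattern used below (yyyy-MM-dd HH:mm:ss) is only an assumption for illustration; match it to whatever string Studio's Ctrl+Space autocompletion inserts for you.

import java.text.SimpleDateFormat;
import java.util.Date;

public class TimestampString {
    public static void main(String[] args) {
        // Assumed pattern; edit to match the string Studio's autocompletion produces.
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        System.out.println(format.format(new Date()));
    }
}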

Manual Input Error Detection

When you enter invalid values in a field, an error icon appears in two places:

- To the left of the stream name at the top of the view.
- To the left of each field that contains an input error.

Hover the mouse over the error icon next to a field to show a tooltip that indicates why the field value is incorrect.

When there are errors, the view also shows an errors detected link at the top. Hover over the link to see in one place the tooltips for all errors in the view. Click the link to take the cursor to the fields with errors, so you can fix them.

In the example below, the value entered for the price field is text, but the field's data type is double. The date field has an extra zero in the year portion of the otherwise valid timestamp entry. Notice that Studio disables the Send Data button until the problems are fixed.

Manual Input of List Fields

The Manual Input view has two ways to enter data for fields with data type list:

- The default list input method, with collapsible list fields and a variable number of element fields.
- An alternate lists as text input method, where you enter an entire list in a single field in array notation.

Default List Input Method

The default list input method is illustrated for the prices, symbols, and numshares fields in the following example:

Notice the following points about the default input method:

- The prices and symbols fields are shown open, with two elements each. Any list can have a variable number of elements, from zero to hundreds. The example could equally have shown six prices elements and four symbols elements; there is no reason the number of elements has to be the same for each list in a tuple.
- The numshares field is shown collapsed. Collapse a list field by clicking the arrow to the left of its name.
- Add an element to a list by clicking the green plus sign on the left of the field's element type. Remove an element by clicking the red X on the left of the element name.
- Set the entire list field to null by clicking the triangle to the left of the green plus sign. Notice that the numshares field is set to null, and the tooltip for the Set this field to null button is shown. (Setting a list field null is not the same as setting one or more elements of the list to null.) See Null Lists for a discussion of null lists compared to empty lists.

Lists as Text Input Method

Click the Enter Lists as Text button to input lists in array notation with comma-separated values. The lists as text input method is illustrated in the following example, which shows the same values being entered as in the example above.

Notice the following points about the lists as text input method:

- Use CSV input style, with the entire list enclosed in brackets and list elements separated by commas.
- For list(string) fields, you do not need to enclose each element in quotes.
- Notice that the numshares field is designated null, which is the same action performed by the Set this field to null button in the default input method. Setting a list field to null is not the same as setting one or more list elements to null. To enter a list field with a single null element, enter [null]. A list field with three null elements would be [null, null, null]. See Null Lists for a discussion of null lists compared to empty lists.

Manual Input of Tuple Fields

The Manual Input view shows fields of type tuple with sub-fields for entering data. The tuple field is collapsible, and you can set the entire tuple to null, or set individual fields to null. Tuple field input is illustrated in the following example, which shows two fields of type tuple, buy_order and sell_order, both having the same schema:

Notice the following points about entering tuple field data:

- Collapse a tuple field by clicking the arrow to the left of its name. Both tuple fields in the example above are shown open.
- The schema of the tuple field is shown on the right, across from the field's name.
- Set the entire tuple field to null by clicking the triangle to the left of the field's schema. (Remember that setting a tuple field null is not the same as setting all the sub-fields of the tuple field to null.)

The following example shows the first tuple, buy_order, with data, and the second tuple, sell_order, set null.

See Null Tuples for a discussion of null tuples compared to empty tuples.

Manual Input of Nested Tuples

The schema for a stream might have a field of type tuple that contains a sub-field, also of type tuple. This inner tuple sub-field is known as a nested tuple. The following example illustrates manual input of a tuple with a nested tuple.

The schema of the stream WrappedTuple has three fields:

- time_recd: timestamp
- source: string
- recd_field: tuple

The schema of the recd_field field has two fields:

- time_sent: timestamp
- embedded_trade: tuple

Finally, the inner tuple field embedded_trade has four fields:

- symbol: string
- numshares: double
- pershare: double
- custid: int
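One way to picture this nesting is as plain Java records, one per tuple level. These records are illustrative only and are not StreamBase classes; they simply mirror the field names and types listed above, with StreamBase timestamps represented here as java.time.Instant for convenience.

import java.time.Instant;

public class NestedTupleExample {
    // Innermost tuple: embedded_trade
    record EmbeddedTrade(String symbol, double numshares, double pershare, int custid) {}

    // Middle tuple: recd_field, which nests embedded_trade
    record RecdField(Instant timeSent, EmbeddedTrade embeddedTrade) {}

    // Top-level stream schema: WrappedTuple
    record WrappedTuple(Instant timeRecd, String source, RecdField recdField) {}

    public static void main(String[] args) {
        WrappedTuple tuple = new WrappedTuple(
                Instant.now(), "manual-input",
                new RecdField(Instant.now(),
                        new EmbeddedTrade("IBM", 100.0, 52.25, 42)));
        System.out.println(tuple);
    }
}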

Using the Feed Simulation Editor

Contents

- Introduction
- Opening the Feed Simulation Editor
- Graphical and Source Views
- Overview of the Graphical Presentation
- Simulation Description Section
- Simulation Streams Section
- Generation Method Section
- Generation Method: Default
- Generation Method: Data File
- Generation Method: Customize Fields
- Generation Method: JDBC Data Source
- Processing Options Section
- Description for Each Stream
- Saving Feed Simulations
- Related Topics

Introduction

The Feed Simulation Editor is StreamBase Studio's interface for creating new feed simulations or editing existing ones. See Running Feed Simulations to learn about running your feed simulations.

Opening the Feed Simulation Editor

Open the Feed Simulation Editor by opening an existing feed simulation file or by creating a new feed simulation.

Open an existing feed simulation file as follows:

- In the SB Authoring perspective, double-click the name of a feed simulation file (with the .sbfs extension) in the Package Explorer.
- In the SB Test/Debug perspective, in the Feed Simulations view, double-click the name of an existing feed simulation file. (A StreamBase application must be running to see its list of feed simulations.)

Create a new feed simulation in one of the following ways:

- From Studio's top-level menu, select File > New > Feed Simulation.
- Click the New Feed Simulation button in the toolbar.
- In the SB Authoring perspective, right-click in the Package Explorer and select New > Feed Simulation.
- In the SB Test/Debug perspective, right-click in the Feed Simulations view and select New Feed Simulation.

In the New StreamBase Feed Simulation dialog:

1. Select an existing project to contain the feed simulation file.
2. Provide a name for your feed simulation, which must be unique within the project.
3. Click Finish.

Graphical and Source Views

The Feed Simulation Editor contains both graphical and source views of the same feed simulation. Notice that there are two tabs at the bottom of the editor. Use these tabs to switch between graphical and source presentations. The settings and values you enter on one tab are reflected on its partner tab after you save the changes.

Overview of the Graphical Presentation

The graphical presentation of the Feed Simulation editor contains the following sections:

- Simulation Description for entering brief documentation for the overall feed simulation.
- Simulation Streams for specifying and editing the schema for each input stream in this feed simulation.
- Generation Method for StreamName specifies how the feed simulation generates or reads data for the selected stream.
- Processing Options for StreamName specifies the runtime behavior of the simulation for the selected stream.

- Description for StreamName for entering brief documentation for the selected stream.

Simulation Description Section

Enter a brief description for the overall feed simulation in the top Simulation Description field. (You can document each stream in this simulation with the second description field at the bottom of the editor view.)

Simulation Streams Section

Use the Simulation Streams section to add, edit, or remove input streams from the current feed simulation. A single feed simulation file can be defined to enqueue data to one or more input streams. The feed simulation definition for each stream can define properties unique to that stream.

Note: The name you specify for each stream must exactly match an input stream name in the StreamBase application this simulation will run against. It is not enough that the stream's schema matches. The stream name in the simulation file and in the application file must be identical. You can use a feed simulation with more than one application, provided that the input stream names match in those applications.

The schema for an input stream in the application does not have to exactly match the schema defined in the feed simulation. For example, you might use a feed simulation to send a subset of data to a stream, relying on default values for the other fields in the schema.

Starting with release 7.1.0, you can specify an input stream with an empty schema, as described in Using Empty Schemas. In this case, only the Default generation method is available for such a stream, and you can specify a limited set of Processing Options for such streams. When run, a feed simulation for streams with an empty schema sends a series of no-fields tuples to the specified stream.

Copy from Stream or Named Schema

Use the Copy from Stream/Named Schema button to quickly copy the schema from an existing stream or named schema in your workspace. Clicking this button opens a dialog that shows a tree list of all modules and interfaces in your current Studio workspace. Click the arrows next to the project folder that you know contains the module or interface of interest, select the stream or named schema of interest, and click OK to copy that schema to the Simulation Streams section.

Adding a New Stream

Let's say a StreamBase application has two input streams: TradesIn and FuturesIn. In the Simulation Streams section, click New Stream to invoke the Add Simulation Stream dialog. In the following example, we have identified the name of the additional input stream in the application.

In this dialog, add a schema to the feed simulation using either of the following methods:

- Use the plus sign icon to add the fields and their data types and sizes line by line.
- Use the Copy Schema From Existing Component icon, which invokes the Copy Schema From dialog described in Copying Schemas. Use this dialog to select an existing schema from a system container stream, or from any module in your Studio workspace.

After clicking OK twice, the updated Simulation Streams section reflects the added stream.

Note: If you have multiple streams defined in the Simulation Streams section, remember to select the target stream before editing the other sections in the feed simulation editor. This is especially true of the Generation Method and Processing Options sections.

Generation Method Section

Use the Generation Method section to specify how this feed simulation should generate or read data for the selected stream. Be sure to select the stream of interest in the Simulation Streams section before continuing with the Generation Method section.

There are four ways to obtain data for this feed simulation:

- Default: generate uniformly random data of the correct data type for each field in the selected input stream.
- Data File: read from a file containing delimited values for each field in the selected input stream.
- Custom: generate random data, with precise control over the type and range of data that goes to each field.
- JDBC: read data for each field from a table in a JDBC-compliant database.

Generation Method: Default

Select the Default option to specify that this feed simulation should generate a default load. This means the following when you run the feed simulation on a running StreamBase application:

- The feed simulation generates about ten tuples per second for each input stream in your application. (You can adjust the rate in the Generation Options section of the Feed Simulation editor.)
- Every int, long, double, and timestamp field is assigned a random value from 0 up to a fixed maximum.
- Every boolean field is assigned true or false.
- Every string field is filled with characters from a random set of uppercase ASCII characters.
- Every blob field is assigned 16 bytes of random data, corresponding to uppercase ASCII characters.
- Every tuple field has all of its subordinate fields filled using the rules above.

When you select the Default generation method, the Timestamp from column, Tuple buffer, Prefill Tuple buffer, and Loop on Tuple buffer controls are dimmed.

Generation Method: Data File

In the Generation Method section, select the Data File option, then click Options. In the Data File Options dialog, specify the path to an existing data file that this feed simulation should use to populate the selected stream. You can also specify which fields in the application's input stream correspond to which columns in the data file. The following image shows the Data File Options dialog filled in for a simple data file example.

The following list describes the options in the Data File Options dialog.

Data file (default: none)
Type the name of an existing delimited value file in your current project, or click Import to import a data file from any location on the local file system into the current Studio project. A preview of the selected file's contents is shown in the File preview section. The feed simulator reads uncompressed plain text CSV files, compressed CSV files, and StreamBase binary files. Compressed and binary files are recognized by their file name extension, as follows:

- .zip: CSV file compressed with zip.
- .gz: CSV file compressed with gzip.
- .bz2: CSV file compressed with bzip2. (Compression with bzip2 can result in significantly smaller files, but at the cost of slower reading times.)
- .bin: Binary output file generated with the StreamBase Binary File Writer adapter. (You must generate the binary files with the same release of StreamBase currently running the feed simulation.)
- .bin.gz: Binary output file generated with the StreamBase Binary File Writer adapter, with that adapter's compression option enabled.
- Any other extension or no extension: Uncompressed, plain text CSV file.

Custom reader (default: none)
Advanced. Use this button to specify the fully qualified name of a Java class that implements a custom file reader for non-standard, proprietary, or binary files. The specified class must be on the classpath of the JVM running StreamBase Studio (or the JVM running the sbfeedsim command). See Feed Simulation with Custom File Reader for instructions on using this feature.

File preview (default: none)
Shows a read-only view of the first few rows of the file specified in the Data file field. The preview updates automatically to reflect selections in other fields of the dialog.

Lines to skip (default: 0)
Enter an integer number of lines to skip before interpreting lines as data. Use this control to skip a header line without using the First row as header option. This allows you to designate a column as a timestamp value without having to map fields to data by incoming column name. You can also use this control to start reading the data file at an arbitrary preferred starting point. For example, a compressed file of market data might have been recorded starting at 9:00 AM on a trading day, but you want to run your feed simulation starting with the trades that occurred after the exchange opened at 9:30 AM. You must empirically determine the starting point row in the data file, such as by entering a guessed number of rows to skip and checking the timestamp for the new top row in the File preview section or the Column mapping grid. Then enter a lower or higher number of rows to skip until the timestamp in the previews matches the timestamp you seek. When the First row as header option is enabled at the same time as this option, the number of lines to skip is counted from the second row of the data file, leaving the first row intact to be interpreted for its label information.

First row as header (default: disabled)
Enable this option to specify that the selected data file has one row of delimited column headers in the first line of the file. Watch the File preview control and the Column mapping grid to see if enabling this option is correct for your data file. When enabled, the column headings of the File preview grid show the heading text read from the file instead of numbers. When disabled (the default), the column headings of the File preview grid are numbered instead of named. This helps you line up data file columns to field names when using the Map to file control. This option does not skip the first row; it reads and interprets the first row as labels. If you have enabled both the First row as header and Lines to skip options, the first row is read for its header information, then the lines to skip start counting after the first row. If the current input stream (for which you are defining this input data file) has an empty schema, the First row as header control is dimmed and unavailable. It is still possible to specify a data file as input to a stream with an empty schema, such as to specify a sequence of timestamps to use with the Timestamp from column field, as described in Using the Timestamp from Column Feature.

Column mapping (default: none)
Use the Column mapping grid to map fields in the data file to fields in the currently selected input stream. By default, fields are lined up one to one. Use the drop-down list in the Map to file column to specify which data file column should be mapped to each schema field. The Map to file column shows column names from the header row if you enabled the First row as header control, and shows column numbers otherwise. If your data file has a column with timestamp data, you can designate that column for use as a relative timestamp entry. In this case, map the timestamp data column using the Timestamp from column control in the Processing Options section, not with the Column mapping section of this dialog, as described in Using the Timestamp from Column Feature.

Map data file columns to sub-fields of tuple fields (default: disabled)
This option only appears in the dialog when the schema of at least one of the input streams for this feed simulation contains at least one field of type tuple. Use this option to specify that the fields of a flat CSV file are to be mapped one-to-one to the sub-fields of tuple fields. This feature lets Studio read flat CSV files generated manually or generated by non-StreamBase applications and apply them to schemas that have tuple fields. See Map to Sub-Fields Option for details.

Delimiter (default: comma)
Specifies the character that delimits fields in your data file.

Quote (default: double)
Specifies the quote character, single or double, that delimits strings within fields in your data file. The default is the double quote.

Timestamp format (default: yyyy-MM-dd HH:mm:ss)
Specifies the incoming format of the timestamp field you have designated as the Timestamp from column field. See the Timestamp from column section for details. The timestamp format pattern uses the time formats of the java.text.SimpleDateFormat class described in the Sun Java Platform SE reference documentation.

Timestamp builder (default: none)
Used in conjunction with the Timestamp format field, this control allows you to build a single timestamp from a combination of two or more data file columns plus optional text strings. See Using the Timestamp from Column Feature for details on using this feature.

Generation Method: Customize Fields

In the Generation Method section, select Custom. In this case, the Timestamp from column, Tuple buffer, Prefill Tuple buffer, and Loop on Tuple buffer controls are dimmed. Click Customize Fields. In the Customize Fields dialog, choose values in the Generation Method column for each field in the schema.

The table in the dialog displays the names and types of the fields available in the selected input stream. The Generation Method column summarizes the way you customize each field. To specify a customized value, select the row and click into the Generation Method cell. Use the drop-down menu for that cell to select from a number of different options, then press Tab or click to move the cursor to the Data Type column. With the cursor moved, one or more data fields appear below the table. In those fields, specify the values relevant for the input stream field. The choices in the drop-down list vary by the data type for each field, as described in the following sections.

For Numeric Value Types

- Constant: a constant numeric value you specify will always be used.
- Enumerated: selects a numeric value from an enumeration, or set of possible values, each of which may have a different weight. Weights for each value in the enumeration default to 1, meaning they are all equally likely to appear. Use a larger weight to make a value appear more often, or a smaller weight to make it appear less often.

- Incremented: a numeric value that starts at a specified value, then is incremented by a specified value until it is outside a restricted range, defined by minimum and maximum values. If you specify a double as the increment, that increment will be used as is, but the result will be truncated. When the value is outside the range, you can choose whether or not to reset the values and repeat.
- Random: performs a random walk. That is, a numeric value starts at a particular value, increments by a particular value, and has a restricted range, defined by minimum and maximum values. If you specify a double as the increment, that increment will be used as is, but the result will be truncated.
- Uniform: a uniformly generated random number in the range defined by minimum and maximum values, inclusive.
- Undefined: no custom value is generated for this field.

For String Value Types

- Constant: a constant string value you specify will always be used.
- RandomString: a string value consisting of a series of random, uppercase characters will be used. You can specify the minimum and maximum length of the generated strings, which default to 4 characters for both fields.
- Enumerated: selects a string value from an enumeration, or set of possible values, each of which may have a different weight. Weights for each value in the enumeration default to 1, meaning they are all equally likely to appear. Use a larger weight to make a value appear more often, or a smaller weight to make it appear less often.
- Undefined: no custom value is generated for this field.

For Boolean Value Types

For Boolean fields, a number with absolute value 0.5 or greater is considered true. For Boolean fields, you can select:

- Constant: set the same specified value for each tuple. Specify true, false, or any of the recognized synonyms for true and false.
- Uniform: set a range of values to alternate between. Setting a range of 0 to 1 will alternate between true and false, setting the field to true about half the time. Specify different range values to weight the distribution of true and false values toward 1 (true) or toward 0 (false). For example, a range of .25 to 1.0 will set the field to true about twice as often as it sets the field to false.
- Undefined: no custom value is generated for this field.

For List Value Types

For fields of type list, the Generation Method column offers the ListSize option. You can specify the minimum and maximum number of elements to be generated, which default to 4 elements for both fields.

For Tuple Value Types

For fields of type tuple, each field of the tuple is shown separately. Each field's row offers the appropriate choice for its data type, as described in the previous sections.
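The Enumerated and Random options described above come down to two small pieces of logic: weighted sampling from a fixed set of values, and a bounded random walk. The sketch below is a conceptual illustration in plain Java, under the assumption that weights act as relative likelihoods; it is not the feed simulator's own implementation.

import java.util.Random;

public class CustomFieldSketch {
    private static final Random RANDOM = new Random();

    // Weighted enumeration: a value's chance of being picked is weight / totalWeight.
    static String pickWeighted(String[] values, double[] weights) {
        double total = 0;
        for (double w : weights) total += w;
        double r = RANDOM.nextDouble() * total;
        for (int i = 0; i < values.length; i++) {
            r -= weights[i];
            if (r <= 0) return values[i];
        }
        return values[values.length - 1];
    }

    // Bounded random walk: start at a value, step by +/- increment, stay within [min, max].
    static double nextWalkValue(double current, double increment, double min, double max) {
        double step = RANDOM.nextBoolean() ? increment : -increment;
        double next = current + step;
        if (next < min || next > max) next = current - step;  // reflect back into range
        return next;
    }

    public static void main(String[] args) {
        String[] symbols = {"IBM", "GE", "MSFT"};
        double[] weights = {2.0, 1.0, 1.0};  // IBM about twice as likely as each of the others
        System.out.println(pickWeighted(symbols, weights));

        double price = 50.0;
        for (int i = 0; i < 5; i++) {
            price = nextWalkValue(price, 0.5, 40.0, 60.0);
            System.out.println(price);
        }
    }
}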

Null Probability Value

For all data types, in the Probability of generating null option, you can specify a value from 0.0 to 1.0 to indicate how often to randomly set the field's value to null. For example, entering 0.12 causes the field's value to be null in about 12% of the generated tuples. By default, this field contains the value 0, meaning the feed simulation will not set any field values to null.

Generation Method: JDBC Data Source

You can configure a feed simulation to use the response from a SQL query to a JDBC database as the source of input tuples for the feed simulation. The configuration of this feature is discussed in Feed Simulation with a JDBC Data Source.

Processing Options Section

Use the Processing Options section to set the runtime behavior of the selected stream. Be sure to select the stream of interest in the Simulation Streams section before continuing with the Processing Options section.

The following list describes the feed simulation processing options. Note that when you specify default values for these settings, those values do not appear in the feed simulation source file.

Log to Application Input (applies to all generation methods; default: enabled after an input stream and its schema are selected)
If enabled, the generated feed simulation data is shown in the SB Test/Debug perspective's Application Input view. This helps you see the generated data and compare it with the dequeued results in the Application Output view.

Specify maximum tuples (applies to all generation methods; default: 0, no set maximum)
Specifies the maximum number of tuples to generate on the input stream. For example, to stop generating on the input stream after 20 tuples, enter 20 in the Specify maximum Tuples field. The default setting, zero, means there is no limit.

Specify maximum time (applies to all generation methods; default: 0, no time limit)
Specifies in seconds the longest time period to run this feed simulation. The default setting, zero, means there is no time limit.

Null string (applies to all generation methods; default: "null")
Specifies the string used to designate null values when encountered in an incoming data stream (whether read from a data file, read from a JDBC database, or generated as part of customized simulation data). The default value is null, which means that by default, StreamBase sends a null value for that field when it encounters the lowercase letters n u l l in a field in an incoming data stream. You can change the null string value to blank to specify the empty string, or to any value you expect the incoming data to have. For example, a CSV file generated from a MySQL database table dump might contain \N to designate null fields. See Null Handling for further information.

Data rate (applies to all generation methods; default: 10 tuples per second after an input stream and its schema are selected)
Specifies the number of tuples per second to be sent to the specified input stream. You can use the up and down arrows to change the value, or click into the text box and enter an integer. A feed simulation with a specification of zero tuples per second will not run.

As fast as possible (applies to all generation methods; default: disabled)
Enables an alternative to specifying a Data rate value. Enable this option to send the feed simulation data as fast as the host computer allows. The actual data rate at runtime is a factor of the machine's speed, and may be somewhat limited. The feed simulator uses a substantial amount of CPU resources when this option is enabled. Use this setting only if any one of the following is true:

- You are sending a small finite data set.
- You are sending a large or infinite data set, and the Maximum Tuples and Maximum Time options are set to low values.
- You are sending a large or infinite data set, you have disabled the Log to Application Input setting, and you have clicked the Disable dequeuing of application output icon on the Application Output view.

Note: Remember that StreamBase Studio is not intended for benchmarking the full performance capabilities of StreamBase Server on production machines. This IDE-managed environment is not suitable for high-speed data rates and high CPU utilization.

Timestamp from column (applies to Data File and JDBC generation methods only; default: disabled)
Allows you to designate the field or column that contains relative timing values to be used to control the pace of sending tuples for the currently selected stream in the Simulation Streams section. You can designate a column in an incoming JDBC database query, in which case the data type of the column must be double or timestamp. You can also designate a column in an incoming data file, in which case the data type of the column must be double or must contain a string representation of a timestamp. When enabling this option, you must also specify whether to start counting time values from zero or from the first value read. For most cases, select first value read. With this option enabled, the feed simulation uses the relative times from the designated timestamp column to drive the timing of the feed simulation. See Using the Timestamp from Column Feature for more on this feature.

Include in synchronized timestamp group (applies to Data File and JDBC generation methods only; default: disabled)
This check box is disabled unless (1) the currently selected stream in the Simulation Streams section uses a Data File or JDBC generation method, and (2) you used the Timestamp from column feature to specify a column in the input schema to use as a source of timestamp information. This option designates the currently selected stream as a member of a group of streams in the current feed simulation file for which StreamBase attempts to coordinate delivery of tuples in timestamp order. See Using Synchronized Timestamp Groups for more on this feature.

Tuple buffer (applies to Data File and JDBC generation methods only; default: 40,000 tuples)
Specifies the size in tuples of the buffer that holds tuples read from a data file or database query. When used in conjunction with the two Prefill options, the specified size serves as a maximum upper limit of the buffer to be prefilled.

Prefill Tuple buffer before starting simulation (applies to Data File and JDBC generation methods only; default: disabled)
Designates whether to fill the entire tuple buffer before sending the first tuple to StreamBase Server.

Prefill and loop on Tuple buffer (applies to Data File and JDBC generation methods only; default: disabled)
When enabled, specifies that this feed simulation is to run in a loop, starting over and resending the first tuple in the buffer after it reaches the last tuple in the buffer. Use this feature to continuously replay a known, repeatable data set, or to generate a longer data set from a smaller one. See Using the Prefill and Loop on Tuple Buffer Feature for more information. Note: When you select this check box, the Prefill Tuple buffer before starting simulation check box is automatically checked for you and dimmed.

Timestamp from Column Feature

The Timestamp from Column feature is discussed in Using the Timestamp from Column Feature.

Include in Synchronized Timestamp Group Feature

The Include in Synchronized Timestamp Group feature is discussed in Using Synchronized Timestamp Groups.

Using the Prefill and Loop on Tuple Buffer Feature

With the Prefill and loop on Tuple buffer feature enabled, the feed simulation reads from the specified data file or JDBC data source. The tuple buffer grows to fit the data source, until either all data source rows have been read, or the buffer reaches the limit specified in the Tuple buffer field, or the process runs out of memory (which results in an error). The feed simulation then sends tuples from the tuple buffer from beginning to end. When the last tuple is reached, the feed simulation immediately resends the first tuple in the buffer, then resends subsequent tuples according to the Data rate and other settings for this simulation. If you also specify the Timestamp from column option, the buffer is sent according to the timing information in the timestamp column. When replaying the tuple buffer, the second and subsequent times through the buffer are also replayed with the same timing.

Tip: To minimize or eliminate the effects of disk access speed when running a feed simulation for benchmarking purposes, you can restrict the size of the buffer with the Tuple buffer option to exactly fit the number of tuples you expect to read from the data source, then use the Prefill and loop on Tuple buffer option.

As tuples are resent from the tuple buffer, no fields are modified in the buffer, including any timestamp or sequence number fields. Let's say the first tuple read into the buffer has sequence number 1, and the last tuple has sequence number 20,000. On starting over and rereading the buffer, sequence number 1 is resent to the running StreamBase application. If your application requires monotonically increasing sequence numbers or timestamp fields, then you must adjust the application logic to detect the restart of a looped-buffer feed simulation, to compensate for the restart of sequenced fields. For example, your application's input stream logic might add 20,000 to each sequence number after a loop restart is detected.

You can use this technique to generate a larger feed simulation data set from a smaller one. For example, let's say you captured a day's worth of trading data in real time in a CSV data file or JDBC database. You want to use this data set with its real-world timestamps to run a month's worth of trading data for testing or benchmarking purposes. To do so, use the Prefill and loop on Tuple buffer option, and have your application's input logic detect when the last tuple in the buffer is received. Then add one day to the received timestamp of the next set of tuples received.
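To make the sequence-number adjustment described above concrete, here is a hedged sketch of the kind of bookkeeping your input logic might perform. It is plain Java, not EventFlow; the buffer size of 20,000 and the idea of detecting a restart when the raw sequence number goes backwards are assumptions taken from the example.

public class LoopRestartAdjuster {
    private final long tuplesPerLoop;   // e.g., 20,000 tuples in the looped buffer
    private long previousSequence = Long.MIN_VALUE;
    private long offset = 0;

    LoopRestartAdjuster(long tuplesPerLoop) {
        this.tuplesPerLoop = tuplesPerLoop;
    }

    // Returns a monotonically increasing sequence number even when the
    // feed simulation loops back to the start of its tuple buffer.
    long adjust(long rawSequence) {
        if (rawSequence < previousSequence) {
            // The raw sequence went backwards: the looped buffer restarted.
            offset += tuplesPerLoop;
        }
        previousSequence = rawSequence;
        return rawSequence + offset;
    }

    public static void main(String[] args) {
        LoopRestartAdjuster adjuster = new LoopRestartAdjuster(20_000);
        long[] incoming = {1, 2, 20_000, 1, 2};      // the loop restarts after tuple 20,000
        for (long seq : incoming) {
            System.out.println(seq + " -> " + adjuster.adjust(seq));
        }
    }
}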

Null Handling in CSV Files or Generated Data

StreamBase handles empty strings encountered in incoming data files, database queries, or feed simulations with custom generated data as follows:

- If an empty string is encountered in an incoming field whose target schema field is a blob or a string, StreamBase sends an empty string value.
- If an empty string is encountered in an incoming field whose target schema field is other than blob or string, StreamBase sends a null value.

You can also set a specific string to be treated as an incoming null field value, as described in the Null string field in the preceding section. (A small sketch of this null-string rule follows the Related Topics list below.)

Description for Each Stream

Use the Description for StreamName field at the bottom of the Editor view to document the currently selected stream. (Use the Description field at the top of the view to document the feed simulation as a whole.)

Saving Feed Simulations

On the upper tabs of the Feed Simulation Editor, StreamBase Studio displays an asterisk if the file has changed. You cannot toggle between the graphical and source presentations of the feed simulation until you save the file. All feed simulations have the .sbfs file extension, and are saved by default in the currently active Studio project.

Related Topics

- Feed Simulations View
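Returning to the null-handling rules above, the sketch below shows the kind of check a custom reader or pre-processing script might apply: a configurable null string (the default "null", or something like \N from a MySQL dump) is mapped to a Java null, while other values, including the empty string, pass through unchanged. This is an illustration of the rule, not StreamBase's own parser, and it covers only the null-string part of the behavior.

public class NullStringHandling {
    // Returns null when the raw field matches the configured null string,
    // otherwise returns the raw value unchanged (including the empty string).
    static String interpretField(String rawValue, String nullString) {
        if (rawValue != null && rawValue.equals(nullString)) {
            return null;
        }
        return rawValue;
    }

    public static void main(String[] args) {
        System.out.println(interpretField("null", "null"));   // null
        System.out.println(interpretField("\\N", "\\N"));     // null (MySQL-style null string)
        System.out.println(interpretField("", "null"));       // empty string stays empty
        System.out.println(interpretField("IBM", "null"));    // IBM
    }
}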

Feed Simulation with a JDBC Data Source

Contents

- JDBC Data Source Overview
- JDBC Driver Setup
- Environment Variable Alternative
- Classpath Setup for sbfeedsim
- JDBC Data Source Options Dialog
- Related Topics

JDBC Data Source Overview

You can configure a feed simulation to use the response of a SQL query to a JDBC database as the source of input tuples for the feed simulation.

Important: The JDBC generation method cannot be used to generate input data for an input stream whose schema contains a field of type list or tuple. Blob fields, however, are supported.

To use this option, you must obtain from the database vendor the JAR file that implements the JDBC driver for the target database. When you have downloaded the vendor's JAR file from the vendor's web site, you must either:

- Install the JAR file in a particular location in the StreamBase installation directory, or
- Set up the STUDIO_BOOT_CLASSPATH environment variable pointing to a different installed location.

Independent of the above steps, you must also include the location of the JDBC JAR file in the standard CLASSPATH environment variable so that command-line sbfeedsim can locate and use it.

These configuration requirements are described in more detail in the following sections.
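Before working through those sections, it can save time to confirm that the driver class is actually loadable from the classpath a given JVM sees (whether that is Studio's boot classpath or the CLASSPATH used by sbfeedsim). The sketch below is a hedged, standalone check; the driver class name shown is a placeholder, so substitute the one documented by your database vendor.

public class DriverOnClasspathCheck {
    public static void main(String[] args) {
        // Placeholder class name; use the one documented by your database vendor.
        String driverClass = "oracle.jdbc.OracleDriver";
        System.out.println("java.class.path = " + System.getProperty("java.class.path"));
        try {
            Class.forName(driverClass);
            System.out.println(driverClass + " is on the classpath.");
        } catch (ClassNotFoundException e) {
            System.out.println(driverClass + " was NOT found; check the JAR location.");
        }
    }
}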

JDBC Driver Setup

StreamBase Studio must have access to the database vendor's JDBC JAR file so that it is available when Studio runs a feed simulation. The simplest solution is to copy the JAR file to the STREAMBASE_HOME/jdk/jre/lib/ext directory, which is the lib/ext directory of the JDK bundled with your StreamBase installation. The ability to add a file to the default STREAMBASE_HOME location may require administrator rights, especially on UNIX. Use the following examples as guidelines; the exact location may be different for your installation.

Windows

Copy the JDBC JAR file to: %STREAMBASE_HOME%\jdk\jre\lib\ext

For example: C:\Program Files\StreamBase Systems\StreamBase.n.m\jdk\jre\lib\ext

UNIX

Copy the JDBC JAR file to: $STREAMBASE_HOME/jdk/jre/lib/ext

For example: /opt/streambase/jdk/jre/lib/ext

Environment Variable Alternative

Your site may require administrator rights in order to add files to C:\Program Files on Windows or /opt on UNIX. In these cases, you may not have permission to copy the vendor's JDBC JAR file to the STREAMBASE_HOME location, as recommended in the previous section. As an alternative, you can specify the environment variable STUDIO_BOOT_CLASSPATH, and point it to the full, absolute path to the vendor's JDBC JAR file. Use the following examples as guidelines.

Windows

Either set the variable globally using the Windows System control panel, or set the variable in a StreamBase Command Prompt and start the sbstudio.exe process from that command prompt. If the path to the JAR file has any directories with spaces in their names, you must enclose the entire path in single quotes. The following example for Windows XP specifies the path to an Oracle JDBC driver's JAR file. Be sure to specify the JAR file name appropriate for your target database:

set STUDIO_BOOT_CLASSPATH='C:\Documents and Settings\sbuser\My Documents\JDBC-Drivers\ojdbc

The following example is for Windows Vista or Windows 7:

set STUDIO_BOOT_CLASSPATH=C:\Users\sbuser\Documents\JDBC-Drivers\ojdbc14.jar

UNIX

Set the variable in the terminal environment from which you will launch the sbstudio process. For example:

export STUDIO_BOOT_CLASSPATH=/home/sbuser/jdbc-drivers/ojdbc14.jar

Classpath Setup for sbfeedsim

When you have saved a feed simulation file that specifies data generation from a JDBC data source, you can run that file from the command line independent of Studio, using the sbfeedsim utility. For this to work, the JDBC driver JAR file must be in the classpath. You can set the CLASSPATH environment variable, or use another standard classpath setting method. If you use the CLASSPATH environment variable, append to the existing classpath the full path to the JDBC JAR file, using a command like the following examples:

Windows

set CLASSPATH=%CLASSPATH%;"C:\Program Files\StreamBase Systems\StreamBase.6.5\jdk\jre\lib\e

UNIX

export CLASSPATH=$CLASSPATH:/opt/streambase/jdk/jre/lib/ext/ojdbc14.jar

JDBC Data Source Options Dialog

In the Generation Method section of the Feed Simulation Editor, select the JDBC option, then click Options. In the JDBC Data Source Options dialog, specify the information required to connect to a JDBC-compliant database, and specify a SQL query that will generate a response from the database to populate the selected input stream.

This dialog represents the entirety of configuration available when using a JDBC data source for a feed simulation. That is, JDBC configuration parameters such as jdbc-fetch-size in the server configuration file only affect Query operator access to a JDBC Table data structure, and do not affect feed simulation data sources.

The table below describes the options in the JDBC Data Source Options dialog.

Driver Class (default: none)
Required field. Select or enter the fully qualified name of the class that implements the JDBC driver for the database you want to use. The drop-down list for this field is populated with example JDBC driver class names. The class name to enter in this field is determined by the actual JDBC driver you obtain from your database vendor and may have changed from these examples. You can select an example driver class name and then edit it, if your driver's class name has changed.

URI (default: none)
Required field. Enter (or select and edit) the JDBC URI that connects to the target database at your site. The drop-down list for this field is populated with example JDBC URI strings, with placeholders for site-specific information such as hostname and database name. These example URI strings cannot be used as provided. They must be edited to specify the correct local information for your target database.

User Name (default: none)
Optional field. If access to your target database requires it, enter a user name that has the authorization level necessary to run the SQL query specified in the SQL field.

Password (default: none)
Optional field. If access to your target database requires it, enter the password for the user name specified in the previous field.

SQL (default: none)
Required field. Enter a fully tested and known working SQL statement that returns rows with the columns in the correct order for the schema of the specified input stream. Use the SQL syntax of your target database to construct your SQL statement. StreamBase Systems strongly recommends using your database vendor's command line query tool or a third-party database query tool to develop the SQL query to use in this field. Get the SQL query to a known, working state outside of StreamBase Studio before attempting to use it with a feed simulation. If your SQL SELECT statement returns the right columns in the wrong order, adjust your SQL statement to return columns that line up by data type with the schema of the specified input stream.

JDBC Fetch Size (default: 0, disabled)
Optional field. Specify an integer to designate a JDBC fetch size, which gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed. The fetch size is a standard feature of JDBC drivers and does not designate a row limit. Some JDBC drivers ignore the fetch size. Consult your database vendor's documentation to learn about methods of determining the optimum fetch size for your target database.

Connect timeout (default: 15 seconds)
Optional field. Specify an integer number of seconds for the JDBC driver to wait for results before declaring an error.

Related Topics

Using the Feed Simulation Editor
Feed Simulations View
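As the SQL field description above recommends, get the driver and the query working outside of StreamBase Studio before wiring them into a feed simulation. One way to do that is a short plain-JDBC program such as the sketch below. The driver class, URI, credentials, table, and class name are placeholders for your own values, not names supplied by StreamBase.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class CheckFeedSimQuery {
    public static void main(String[] args) throws Exception {
        // Fails here if the driver JAR is not on the classpath (see JDBC Driver Setup above).
        Class.forName("oracle.jdbc.OracleDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521:mydb", "sbuser", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT symbol, price, quote_time FROM quotes")) {
            // Confirm the columns come back in the order, and with the types,
            // expected by the schema of the feed simulation's input stream.
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                System.out.println(i + ": " + md.getColumnName(i)
                        + " (" + md.getColumnTypeName(i) + ")");
            }
        }
    }
}

If the columns print in the same order and with types compatible with the input stream's schema, the same query is a good candidate for the SQL field.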

Feed Simulation Timestamp Options

Contents

Introduction
Specifying Timestamp Formats
Using the Timestamp from Column Feature
Using Timestamp from Column with Empty Schemas
Using the Timestamp Builder Feature
Using Synchronized Timestamp Groups
Related Topics

Introduction

This page describes the timestamp-related options of the Feed Simulation Editor. See Using the Feed Simulation Editor for more on the Editor.

Specifying Timestamp Formats

The feed simulation timestamp features described on this page may require you to specify a timestamp format pattern that determines the interpretation of a date and time specified in string form. StreamBase applies the timestamp format pattern to the time and date strings, and substitutes a StreamBase timestamp data type. The timestamp format pattern uses the time formats of the java.text.SimpleDateFormat class described in the Sun Java Platform SE reference documentation. Once you have determined the timestamp format pattern that describes your data file's date and time information, type that format pattern string into the Timestamp format field of the Data File Options dialog.

The following table shows some simple examples of timestamp format patterns. Study the SimpleDateFormat reference page to understand how to describe more complex examples.

String data in the CSV or text data file    Timestamp format pattern
12/25/09                                    MM/dd/yy
25-Dec-2009                                 dd-MMM-yyyy
11:27 AM                                    h:mm a
14:26:00                                    HH:mm:ss
09:47:                                      HH:mm:ss.SSSZ
:26:02                                      yyyy-MM-dd HH:mm:ss

Using the Timestamp from Column Feature

When using a data file or database query as input for your feed simulation, you can specify one input column as the source of a relative timestamp to use in timing the simulation. If you are using a captured feed as input, and the original feed includes timestamp values, you can use this feature to run your feed simulation with the exact same pace of sending tuples as used by the original feed.

When using a JDBC database query as the timestamp column, the data type of the specified timestamp column must be StreamBase timestamp or double. When using a column of a data file as the timestamp column, the data type must be either double or the string representation of a timestamp. When using a column of type double, the numbers represent seconds. When using a string representation of a timestamp, you must also use the Timestamp format field in the Data File Options dialog as described in Specifying Timestamp Formats.

When the feed simulation is run, tuples are sent on the schedule specified by the time difference in seconds between each successive timestamp. That is, StreamBase ignores the absolute time and date in the timestamp value, and instead calculates the relative amount of time between the timestamp in each incoming tuple. For example, we might have an input CSV file that starts with the following lines:

BA,51.25,09/27/08 16:20:30,100.0
DEAR,25.43,09/27/08 16:20:45,102.0
EPIC,142.85,09/27/08 16:20:50,105.0
FUL,15.76,09/27/08 16:21:00,105.5
GE,25.151,09/27/08 16:21:05,106.0

(This is the same file used as an example in the picture of the Data File Options dialog.)

In this example input file, we can designate either column 3 or column 4 to be a source of relative timestamps.

To use column 3 as the timing source, designate column 3 in the Timestamp from column option, and set the option to start counting from the first value read. You must also specify the string MM/dd/yy HH:mm:ss as the format specifier for the timestamp strings in column 3. In this case, the feed simulator sends the first tuple immediately on startup, then sends the next few tuples on the following schedule:

First tuple sent: on startup of the feed simulation
Second tuple sent 15 seconds later: 16:20:45 minus 16:20:30 = 15 seconds
Third tuple sent 5 seconds later: 16:20:50 minus 16:20:45 = 5 seconds
Fourth tuple sent 10 seconds later: 16:21:00 minus 16:20:50 = 10 seconds
Fifth tuple sent 5 seconds later: 16:21:05 minus 16:21:00 = 5 seconds
And so on

To use column 4 as the timing source, designate column 4 in the Timestamp from column option, and set the option to start counting from the first value read. In this case, the feed simulator sends tuples

according to the following schedule:

First tuple sent: on startup of the feed simulation
Second tuple sent 2 seconds later: 102 minus 100 seconds = 2 seconds
Third tuple sent 3 seconds later: 105 minus 102 seconds = 3 seconds
Fourth tuple sent 1/2 second later: 105.5 minus 105 seconds = 0.5 seconds
Fifth tuple sent 1/2 second later: 106 minus 105.5 seconds = 0.5 seconds
And so on

Using Timestamp from Column with Empty Schemas

It is possible to use the Timestamp from column feature even for streams with an empty schema. An input stream might have an empty schema, for example, when used as input to a Query operator configured for Read operations. In this case, sending a no-fields tuple to the stream triggers the Query's Read operation.

In this case, any data file used to feed tuples to such a stream cannot have any fields. So why use a data file at all in this case? The data file for an empty-schema stream can contain a single column of timestamp or double values that you use with the Timestamp from column feature described in the previous section. In this way, your feed simulation can trigger a Query Read operation on the schedule defined by the differences between values in the timestamp column.

Using the Timestamp Builder Feature

The Timestamp Builder allows you to build a single timestamp from a combination of one or more data file columns plus optional text strings. Use the Timestamp builder field in the Data File Options dialog when the incoming data in the CSV or text file has components of a timestamp in different columns. For example, in a CSV file, the date of an event might be in column 3 while the time the event occurred might be in column 5.

Follow these steps to use the Timestamp Builder:

1. In the Data File Options dialog, fill in the Timestamp format field with a timestamp format pattern that describes the data found in two or more columns. See Specifying Timestamp Formats. For example, in the figure below, the CSV file has the date in column 3 in the form month/day/year and the time in column 5 in the form hours:minutes, counting hours from 0 to 23. Therefore, you would enter the following timestamp format pattern: MM/dd/yyyy HH:mm.

2. In the Timestamp builder field, enter the column numbers of the CSV file columns that contain your date components, with each column number in braces, in the same order as described in the Timestamp format field. For example, for the date in column 3 and the time in column 5, enter {3} {5}.

Tip: As you enter your Timestamp builder string, StreamBase interactively attempts to interpret the specified data as a timestamp as described in the Timestamp format field, and shows failures in the top portion of the dialog. Use this as a guide to help you enter a valid Timestamp builder string that is interpreted without errors:

3. Determine which field in the input stream contains the timestamp data. (In the figure below, this is the date field.) In the Map to file column for that field in the Column mapping table, select Timestamp builder from the drop-down list. This tells StreamBase that the complete timestamp is to be generated from more than one column, as described in the Timestamp builder field.

The following figure illustrates the use of the Timestamp Builder feature.
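If you want to confirm a Timestamp format pattern and Timestamp builder string before running the simulation, you can approximate what Studio does with a few lines of Java built on the same java.text.SimpleDateFormat class described in Specifying Timestamp Formats. In the sketch below, only the {3} {5} builder string and the MM/dd/yyyy HH:mm pattern come from the example above; the sample row values and the class name are hypothetical.

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class TimestampBuilderCheck {
    public static void main(String[] args) throws ParseException {
        // Hypothetical CSV row; index 0 is unused so that indexes match the
        // 1-based column numbers used in the Timestamp builder field.
        String[] col = { "", "XYZ", "42.10", "09/27/2008", "extra", "16:20" };

        // Timestamp builder string {3} {5}: column 3 (date) plus column 5 (time).
        String combined = col[3] + " " + col[5];

        // Timestamp format pattern from the Timestamp format field.
        SimpleDateFormat fmt = new SimpleDateFormat("MM/dd/yyyy HH:mm");
        Date parsed = fmt.parse(combined);
        System.out.println(combined + " -> " + parsed);
    }
}

If the combined string parses without a ParseException, the same pattern and builder string should be interpreted without errors in the Data File Options dialog.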

80 Your Timestamp builder string can also contain static strings to fill in portions of the date or time fields that will be interpreted the same for all generated timestamps. These strings must be accounted for in the timestamp format pattern in the Timestamp format field. For example, your CSV file might have the date in column 1 and the hours, minutes, and seconds each in separate columns starting with column 4. Let's say your Timestamp format field is MM/dd/yyyy HH:mm:ss, which specifies colons between the portions of the time. In this case, add static colons to your Timestamp builder string between the column designators for the time components: {1} {4}:{5}:{6}. You can use static strings in your Timestamp builder string for many purposes. For example, consider the CSV file illustrated in the figure above, but with a Timestamp format string that specifies hours, minutes, and seconds (MM/dd/yyyy HH:mm:ss), and a Timestamp builder string that uses static numbers in the hours portion of the time string: {3} 00:{5}. This would force column 5 to be interpreted as minutes and seconds instead of hours and minutes. You might do this in conjunction with the Timestamp from column feature to replay a feed simulation with the same relative timings as the originally collected CSV file, but significantly faster. Using Synchronized Timestamp Groups The synchronized timestamp group feature relies on and extends the Timestamp from column feature described in Using the Timestamp from Column Feature above. Make sure you read and thoroughly understand that section before attempting to use the Synchronized Timestamp Group feature. The synchronized timestamp group feature lets you designate the currently selected input stream as a member of a group of streams in the current feed simulation file for which StreamBase coordinates delivery of tuples in timestamp order. Use this feature on two or more streams in the same feed simulation file to have the tuples on each stream arrive at StreamBase Server sorted in timestamp order. For example, you might have two CSV files from a market data vendor, one containing quote information, the other containing trade information. Each CSV file has a timestamp field for each tuple. Your feed simulation can read the two CSV files as Data File sources onto two different input streams, with the tuples on the two streams coordinated in timestamp order. The check box that enables this feature is dimmed and unavailable unless: The currently selected stream in the Simulation Streams section specifies a Data File or JDBC generation method. You enabled the Timestamp from column feature and specified a column in the input schema to use as a source of timestamp information. When you select the Include in synchronized timestamp group check box, the using... start time control in the Timestamp from column line automatically changes to first value read, and dims out to prevent you from changing that setting. The synchronized timestamp group feature always uses the first value of the associated data file or JDBC table as the starting point. How Synchronized Timestamp Groups Work The synchronized timestamp group feature does not expect the data sources feeding two input streams to have an equal number of tuples. This feature is not an attempt to force two or more input streams to line up tuple for tuple. Using Synchronized Timestamp Groups 80

Instead, this feature's purpose is to compare the timestamps of each tuple on all streams in the synchronized group, and to allow tuples onto each stream in the correct relative order. This feature guarantees that all streams in the synchronization group receive their incoming tuples in monotonically increasing order, and in time order relative to wall clock time.

The streams in the synchronized timestamp group do not need to have the same schema, or even similar schemas. The only requirement is that each stream's schema contains a field of monotonically increasing timestamp values (or contains numbers or strings that can be interpreted as timestamp values).

For example, consider the following two CSV data files, where the first column contains integer values representing the number of seconds elapsed since the start of the trading day. The first file, quotes.csv, serves as the data source for input stream InQuotes, and contains:

1000,IBM,...
1002,DELL,...
1003,MSFT,...
1006,IBM,...
1008,ORCL,...
1010,DELL,15.59

The second file, trades.csv, serves as the data source for input stream InTrades. It contains:

1005,115.54,A8934T34,IBM
1008,15.88,B2399W05,DELL
...

You have a feed simulation file where both InQuotes and InTrades are listed in the Simulation Streams section. In the Processing Options section:

1. Select InQuotes and InTrades in turn, and set each stream for Timestamp from column 1.

2. For each stream, check the Include in synchronized timestamp group check box.

Now when you run your application and send it your feed simulation, the two streams receive the tuples shown in the following table, in the order shown.

Relative time order of the feed simulation   InQuotes Stream receives   InTrades Stream receives
start of feed simulation                     1000,IBM,...
+2 seconds                                   1002,DELL,...
+1 second                                    1003,MSFT,...
+2 seconds                                                              1005,115.54,A8934T34,IBM
+1 second                                    1006,IBM,...
+2 seconds                                   1008,ORCL,...              1008,15.88,B2399W05,DELL
+2 seconds                                   1010,DELL,15.59

For the case where both streams had a transaction at timestamp 1008, the tuples might arrive on the two streams in either order. There is no guarantee that any stream in a group receives same-timestamp tuples in any predictable order. The only guarantee is that each stream receives all its tuples in order.
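Conceptually, the feature behaves like a merge of two timestamp-sorted sources: whichever pending tuple has the smaller timestamp is released first. The following sketch is only a rough Java model of that ordering rule, using rows drawn from the two files above; it is not the sbfeedsim implementation, and the Row record is an illustrative stand-in (records require Java 16 or later).

import java.util.List;

public class TimestampMergeSketch {
    record Row(long ts, String csvLine) {}

    public static void main(String[] args) {
        // Rows from quotes.csv and trades.csv above, already sorted by timestamp.
        List<Row> quotes = List.of(
                new Row(1000, "1000,IBM,..."), new Row(1002, "1002,DELL,..."),
                new Row(1003, "1003,MSFT,..."), new Row(1006, "1006,IBM,..."),
                new Row(1008, "1008,ORCL,..."), new Row(1010, "1010,DELL,15.59"));
        List<Row> trades = List.of(
                new Row(1005, "1005,115.54,A8934T34,IBM"),
                new Row(1008, "1008,15.88,B2399W05,DELL"));

        int q = 0, t = 0;
        while (q < quotes.size() || t < trades.size()) {
            // Release whichever pending row has the smaller timestamp.
            // Ties (1008 here) may go to either stream; the real feature makes no promise.
            boolean takeQuote = t >= trades.size()
                    || (q < quotes.size() && quotes.get(q).ts() <= trades.get(t).ts());
            if (takeQuote) {
                System.out.println("InQuotes <- " + quotes.get(q++).csvLine());
            } else {
                System.out.println("InTrades <- " + trades.get(t++).csvLine());
            }
        }
    }
}

Running it prints the same interleaving as the table above, including the 1008 tie, whose order is not guaranteed by the real feature.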

82 Limitations Feed simulations that use the synchronized timestamp group feature cannot use client buffering. This means that: When running such a feed simulation in Studio, the default client buffering normally used is automatically disabled. When running such a feed simulation on the command line with sbfeedsim, you must use the option -b 0 to disable all client buffering. For such feed simulations, do not use the -b option with any other argument, and do not run sbfeedsim without -b 0. As a consequence of disabling client buffering, feed simulations with a synchronized timestamp group may run slower than feed simulations without the group feature. Client buffering is not the same as tuple buffering. The default tuple buffer of 40,000 is still used, both in Studio and with sbfeedsim. You are free to modify the tuple buffering number, and to use the prefill and prefill and loop options in conjunction with a synchronized timestamp group. Related Topics Using the Feed Simulation Editor Feed Simulations View Back to Top ^ Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us Limitations 82

Feed Simulation with Custom File Reader

Contents

Custom File Reader Overview
Required Classpath Setup
Custom File Reader Sample
Programming Considerations
Related Topics

Custom File Reader Overview

Standard StreamBase provides both Data File and JDBC options to specify a source for a stream of input tuples for testing your StreamBase application in a feed simulation. As an alternative, you can write custom Java file reading code to read non-standard, proprietary, or binary files as the source of your feed simulation's input tuples. StreamBase provides a way to use your custom code instead of its internal CSV-reading code in conjunction with the Feed Simulation Editor's Data File option.

In many cases, the column mapping and timestamp conversion options in the Data File Options dialog are flexible enough to adapt or convert any format CSV file for use as a feed simulation input. Most data feeds can be converted to CSV format. You only need to consider writing a custom file reader in cases where your data feed is available in a proprietary or binary file format and converting to CSV would slow down the feed simulation, or when you need to adjust a non-standard CSV file.

Your custom file reading code must extend one of the classes in the com.streambase.sb.feedsim package in the StreamBase Client Library, as described on this page.

Required Classpath Setup

You must place the class file containing your feed simulation custom reader on the classpath of the JVM that runs StreamBase Studio and the sbfeedsim command. The classpath must be configured before you start Studio. Use the STREAMBASE_FEEDSIM_PLUGIN_CLASSPATH environment variable or the

84 streambase.feedsim.plugin-classpath Java system property to specify the path to a package directory or JAR file that contains your custom class. If your development environment or your application requires more than one custom file reader class, specify the paths as a list separated with semicolons (Windows) or colons (UNIX). Use the Custom reader button in the Data File Options dialog to specify the fully qualified name of your custom Java class. The dialog shows an error message if it cannot locate your custom class. This error message is resolved with the following steps: 1. Exit Studio. 2. Set the STREAMBASE_FEEDSIM_PLUGIN_CLASSPATH environment variable to the location of your custom file reader class. 3. Restart Studio: If you set the environment variable temporarily in a terminal window or StreamBase Command Prompt, then restart Studio from that command prompt with the sbstudio command. For Windows, if you set the environment variable globally, then restart Studio from its icon. When Studio can locate your custom class, it shows the contents of the selected file in the File preview grid of the Data File Options dialog: Consider the following points when setting your custom reader classpath: While developing your custom reader, set the environment variable to the java-bin directory of your Studio project, so that you can test your class without stopping to generate a JAR file. Studio automatically and silently builds Java source files in a project's java-src directory and places the resulting class files in the java-bin directory. (Studio does not display the java-bin directory in the Package Explorer view.) For example, for Windows: set STREAMBASE_FEEDSIM_PLUGIN_CLASSPATH= C:\Users\sbuser\Documents\StreamBase Studio n.m Workspace\MyCustomReaderProject\java-bin Required Classpath Setup 84

For Linux and Bash:

export STREAMBASE_FEEDSIM_PLUGIN_CLASSPATH="/home/sbuser/StreamBase Studio n.m Workspace/MyCustomReaderProject/java-bin"

To share a completed custom reader class with other developers and testers, use Studio to save the java-bin directory as a JAR file. Then specify the path to the JAR file like these examples:

For Windows:

set STREAMBASE_FEEDSIM_PLUGIN_CLASSPATH=C:\SBappSupport\MyCustomReader.jar

For Linux and Bash:

export STREAMBASE_FEEDSIM_PLUGIN_CLASSPATH=/home/sbadmin/sbappsupport/MyCustomReader.jar

Custom File Reader Sample

StreamBase provides a sample showing two implementations of a custom file reader. See Feed Simulation Custom Reader Sample for instructions on loading and running the sample.

Programming Considerations

Your custom file reader class must extend one of the classes in the com.streambase.sb.feedsim package in the StreamBase Java Client Library. See the Javadoc for these classes in the Java API Documentation. You can extend one of two classes:

FeedSimCSVInputStream
FeedSimTupleInputStream

Extending the FeedSimCSVInputStream Class

The FeedSimCSVInputStream class itself extends java.io.InputStream, from which it inherits its read() method. This class reads a specified file and passes its contents to the feed simulation mechanism a character at a time. This is the class to extend in the majority of custom file reader cases. When extending the FeedSimCSVInputStream class, you must:

Provide a constructor that accepts a string path to a file.
Provide an override of the read() method that returns a character.

When using a custom reader class that extends the FeedSimCSVInputStream class, the Data File Options dialog uses the read() method of your class to display the contents of your data file in the File preview grid. All the options in the Data File Options dialog and Feed Simulation Editor are available when using a custom reader that extends the FeedSimCSVInputStream class.

For an example, see MyFeedSimCSVPlugin.java in the Custom File Reader Sample.
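To make the constructor-plus-read() contract concrete, here is a sketch of a reader that converts a pipe-delimited file to CSV characters on the fly. To keep the example self-contained it extends plain java.io.InputStream; a real custom reader would extend FeedSimCSVInputStream instead and call whatever superclass constructor its Javadoc requires (see MyFeedSimCSVPlugin.java in the sample). The class name and the pipe-delimited input format are illustrative assumptions.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;

public class PipeDelimitedReaderSketch extends InputStream {
    private final BufferedReader in;
    private String pending = "";   // converted characters not yet handed out
    private int pos = 0;

    // The documented contract: a constructor that accepts a string path to a file.
    public PipeDelimitedReaderSketch(String path) throws IOException {
        this.in = new BufferedReader(new FileReader(path));
    }

    // The documented contract: read() returns the next character of CSV text, or -1 at end of file.
    @Override
    public int read() throws IOException {
        if (pos >= pending.length()) {
            String line = in.readLine();
            if (line == null) {
                return -1;
            }
            pending = line.replace('|', ',') + "\n";   // convert one record to a CSV line
            pos = 0;
        }
        return pending.charAt(pos++);
    }

    @Override
    public void close() throws IOException {
        in.close();
    }
}

Once compiled onto the plugin classpath described above, a reader like this is what you would name with the Custom reader button in the Data File Options dialog.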

Extending the FeedSimTupleInputStream Class

The FeedSimTupleInputStream class extends FeedSimCSVInputStream. This class reads a specified file and passes its contents to the feed simulation mechanism a tuple at a time. Extend this class as a higher performance option, but only if the format of your data file is amenable. If your data file format is a CSV-like text format, there is no advantage in converting the data to tuples for the feed simulation. For binary files or very complex text files, however, this class can offer a performance advantage.

When extending the FeedSimTupleInputStream class, you must:

Provide a constructor and read() override as described for the FeedSimCSVInputStream class.
Provide a getSchema() method.
Provide a readTuple() method.

Consider the following limitations when extending the FeedSimTupleInputStream class:

The File preview grid of the Data File Options dialog uses the read() method of your class to display the preview, not the readTuple() method. Thus, the file preview may not match what your class delivers to the feed simulation.

If you use the Column mapping, Timestamp format, or Timestamp builder features of the Data File Options dialog when using a custom class that extends the FeedSimTupleInputStream class, the feed simulation mechanism falls back to calling the read() method instead of the readTuple() method. This means there is no advantage in using the FeedSimTupleInputStream class when using these features.

For an example, see MyFeedSimTuplePlugin.java in the Custom File Reader Sample.

Related Topics

Using the Feed Simulation Editor
Feed Simulation with a JDBC Data Source
Feed Simulations View
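Returning to the trade-off above: readTuple() pays off mainly when decoding each record takes real work, as with fixed-width binary files. The following standalone sketch shows that kind of per-record decoding with plain Java I/O only. The record layout (an 8-byte symbol followed by an 8-byte double price) and the class name are assumptions for illustration, and the StreamBase-specific parts, returning a tuple from readTuple() and describing it in getSchema(), are omitted because their exact signatures belong to the com.streambase.sb.feedsim Javadoc and the MyFeedSimTuplePlugin.java sample.

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class BinaryQuoteDecoder {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(args[0])))) {
            byte[] sym = new byte[8];
            while (true) {
                try {
                    in.readFully(sym);               // fixed-width symbol field
                } catch (EOFException eof) {
                    break;                           // clean end of file
                }
                double price = in.readDouble();      // 8-byte IEEE double
                String symbol = new String(sym, StandardCharsets.US_ASCII).trim();
                // In a real reader, these values would populate the tuple returned by readTuple().
                System.out.println(symbol + "," + price);
            }
        }
    }
}

For a format like this, most of the cost is in decoding, so handing the feed simulation whole tuples avoids re-serializing to CSV text only to parse it again.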

Map to Sub-Fields Option

Contents

Overview
Examples
Testing CSV Files

Overview

The Map to sub-fields of tuple fields option appears in two places in StreamBase Studio:

In the Data File Options dialog invoked from the Feed Simulation Editor, when specifying options for a CSV data file used as input for a feed simulation. This option only appears in this dialog if the schema of the input stream for this feed simulation includes at least one field of type tuple.

In the StreamBase Test editor, when specifying options for a CSV data file used as a data validation file for a unit test. (In StreamBase releases before 7.0, this option was labeled Map to Leaf Fields.)

Use the Map to sub-fields option to specify that the fields of a flat CSV file are to be mapped to the sub-fields of tuple fields, not to the tuple fields themselves. This feature lets Studio read flat CSV files generated manually or generated by non-StreamBase applications such as Microsoft Excel, and apply them to schemas that have fields of type tuple.

Do not enable this option for reading any CSV file whose fields have sub-fields, designated by quotes within quotes according to the CSV standard. Do not enable this option for reading hierarchical CSV files generated by StreamBase Studio, or by a StreamBase adapter such as the CSV File Writer Output adapter. For example, let's say StreamBase generates a CSV file to capture data emitted from an output stream whose schema includes tuple fields. In this case, the generated CSV file is already in the correct format to reflect the nested tuple field structure, and does not need further processing to be recognized as such.

Do enable this option for CSV files generated by third-party applications, including Microsoft Excel. These CSV files generally have a flat structure, with each field following the next, each field separated by a comma,

tab, space, or other delimiter. Despite the flat structure, if the fields of a CSV file are ordered correctly, you can use the Map to sub-fields option to feed or validate a stream with nested tuple fields.

Examples

The examples in this section will clarify this feature.

Let's say we have an input stream that has a two-field schema, T tuple(i1 int, i2 int, i3 int), W tuple(x1 string, x2 string), as illustrated in the figure below. There are two ways to create a CSV file that contains fields that correctly map to this schema:

Create a CSV file that contains the expected hierarchy, separated and using quotes within quotes according to CSV standards.

Create a flat CSV file that contains the correct number of fields in the right order, then tell StreamBase to interpret this file by mapping to the sub-fields of the two tuples.

Using a Hierarchical CSV File

The following example shows a hierarchical CSV file that can be used with the schema shown above. In this file, each line maps to two fields, and each field contains sub-fields. To use a CSV file like this example as a feed simulation data file or a unit test validation file in Studio, do not enable the Map to sub-fields option.

"100,200,300","alpha,beta"
"655,788,499","gamma,delta"
"987,765,432","epsilon,tau"

Using a Flat CSV File

The following example shows a flat CSV file that can also be used with the schema shown above. In this case, you must enable the Map to sub-fields option.

100,200,300,alpha,beta
655,788,499,gamma,delta
987,765,432,epsilon,tau

The following image shows the Column mapping grid of a Data File Options dialog that is reading this flat CSV file. The Map data file columns to sub-fields check box is visible and selected.
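To make the mapping explicit, here is a plain-Java rendering of what the Map to sub-fields option effectively does with the flat file above for the schema T tuple(i1 int, i2 int, i3 int), W tuple(x1 string, x2 string). The record types are stand-ins for the two tuple fields; this is an illustration, not the StreamBase implementation, and records require Java 16 or later.

public class MapToSubFieldsSketch {
    record T(int i1, int i2, int i3) {}
    record W(String x1, String x2) {}

    public static void main(String[] args) {
        String[] flatLines = {
                "100,200,300,alpha,beta",
                "655,788,499,gamma,delta",
                "987,765,432,epsilon,tau" };
        for (String line : flatLines) {
            String[] f = line.split(",");
            // The five flat columns are distributed, in order, across the sub-fields of T and W.
            T t = new T(Integer.parseInt(f[0]), Integer.parseInt(f[1]), Integer.parseInt(f[2]));
            W w = new W(f[3], f[4]);
            System.out.println("T=" + t + "  W=" + w);
        }
    }
}

Each flat line yields one value for T and one for W, which is the interpretation Studio applies when the check box is selected.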

Testing CSV Files

When specifying a CSV file to use with a feed simulation, the Data File Options dialog shows you graphically how the CSV file will be interpreted. However, StreamBase cannot fully validate the CSV file against the input port's schema until the application is run. When using the Map to sub-fields option for a feed simulation, use the following steps to make sure your CSV file is validated as expected:

1. Run the application.

2. Start the feed simulation that uses the CSV file.

3. Examine the resulting tuples in the Application Input or Application Output views.

For the example CSV files above, the following Application Output view shows that the tuples fed to the input stream were interpreted as expected, and all sub-fields were filled with data:

The same results are obtained in these two cases:

Using the hierarchical CSV file with the Map to sub-fields option disabled.
Using the flat CSV file with the Map to sub-fields option enabled.

If you see several fields interpreted as null, this indicates that the Map to sub-fields option is enabled for an already-hierarchical CSV file, or that the fields in a flat CSV file do not line up field-for-field with the schema of the input port you are feeding. The following shows an example of an incorrect result. In this case, only the first sub-field of each tuple received input.

Command Line Feed Simulations

To run a feed simulation at the command line, you must have a StreamBase Server already running. See Running Applications from the Command Line for instructions. When running StreamBase command-line utilities under Windows, be sure to open and run the commands in a StreamBase Command Prompt.

The following example starts a feed simulation with generated data. The sbfeedsim process connects to the running sbd process, reads the schemas of its inputs, and generates appropriate data. The feed simulation stops when either 1000 tuples have been generated or 120 seconds have elapsed, whichever comes first.

sbfeedsim --max-tuples 1000 --max-time 120

The feed simulation runs until one of its limit options reaches its maximum, or until you press Ctrl+C.

If you have created and saved a feed simulation file in StreamBase Studio, you can run it against your StreamBase application using the command-line sbfeedsim utility. For example, the following commands on a UNIX system start the StreamBase Server with the Best Bids and Asks sample application, and then run the NYSE.sbfs feed simulation file provided with that sample.

cd studio-workspace/sample_bestbidsandaks
sbd -b BestBidsAsks.sbapp
sbfeedsim NYSE.sbfs

For Windows, open two StreamBase Command Prompt windows and navigate to the workspace location of the Best Bids and Asks sample. Run this command in the first window:

sbd BestBidsAsks.sbapp

Run this command in the second window:

sbfeedsim NYSE.sbfs

See Default Installation Directories to determine the location of your Studio workspace directory on UNIX and Windows.

If you have feed simulation files created and saved with StreamBase 3.5 or earlier, use the sbfeedsim-old command instead. See the sbfeedsim and sbfeedsim-old reference pages for more on the sbfeedsim options.


Recording and Playing Back Data Streams

Contents

Using the Recordings View in StreamBase Studio
Using Commands to Perform Recordings and Playbacks
Related Topics

StreamBase allows you to record data enqueued onto one or more input streams, and subsequently play the recorded data back on the same application, at the original speed or an accelerated speed. Among its many uses, this feature allows you to test alternative processing strategies in the StreamBase operators, to see the effect of changes, given the same set of recorded data. This topic explains how you can use the graphical Recordings View in StreamBase Studio, or the corresponding command-based sbrecord and sbfeedsim features in a terminal window.

Using the Recordings View in StreamBase Studio

By default, the Recordings View is located near the upper left corner of the SB Test/Debug perspective. To record tuples using the Recordings View, an application must be running. The sequence of steps is:

1. Start the StreamBase Server for an EventFlow or StreamSQL application. (See Running Applications in Studio for instructions.)

2. In the Recordings View, click the New Recording button.

3. In the New Recording dialog, accept the provided default file name for your recording, or enter a new name. Click OK.

4. The recording begins when StreamBase detects that data has been enqueued onto one of the input streams of the running application. Enqueue data to an input stream; for example, you can start a Feed Simulation for a specific input stream, as described in Running Feed Simulations.

5. If the application has more input streams, you can enqueue data onto them as well. For each input stream, StreamBase creates a separate comma-separated value (CSV) file. By default, each CSV file is named applicationname-timestamp-streamname.csv in your project folder.

6. When you have finished recording the inbound data, click the Stop button on the Recordings View.

7. You can now replay the recording on the StreamBase application. The Replay speed option's default is to replay the recorded data at 1x, its original speed. You can increase the replay speed by selecting one of the provided multiples.

Note: If the replay involves multiple input streams, it is possible that the multiple streams will not be synchronized with each other. Thus the recorded data from different streams may arrive in the StreamBase application in a different order from one replay to another.

Using Commands to Perform Recordings and Playbacks

You can use the sbrecord and sbfeedsim commands to perform the recording and playback steps, instead of using StreamBase Studio. You can see a reminder of the syntax of these commands in one of several ways:

In man pages on UNIX installations.
By the sbrecord -h and sbfeedsim -h commands.
By the sbrecord --help and sbfeedsim --help commands.
In the sbrecord command reference.
In the sbfeedsim command reference.

The command-based record and playback steps are:

95 1. In a terminal window or StreamBase Command Prompt, start a StreamBase Server instance for your application. Before you can record tuples on the input streams, the application must be running. For example: sbd MyApplication.sbapp 2. Use the sbrecord command to start the recording. You can use the --name parameter to set a non-default name for the recording. For example: sbrecord --name feedproc01aug09 The recording begins once StreamBase detects that data has been enqueued onto one of the input streams. By default, each recording is named applicationname-timestamp.sbrec, and is stored in your project's directory in your Studio workspace. 3. Enqueue data to one or more input streams. For example, you can run a StreamBase producer client that enqueues data onto streams, or you can use sbfeedsim: sbfeedsim -f feedproc.sbfs For each input stream in the running application, sbrecord generates a separate comma-separated value (CSV) file. By default, each CSV file is named applicationname-timestamp-streamname.csv. If you used the --name parameter on the sbrecord command, the CSV files are named yourname-streamname.csv. 4. When you have finished enqueuing data, stop the client or the sbfeedsim command that is sending data to the running application. 5. Shut down the StreamBase Server with: sbadmin shutdown 6. You can now replay the recording on the StreamBase application, using the sbfeedsim-old command. Note The note above about multiple input streams applies equally to recordings made with sbrecord. Before you can run sbfeedsim, you must restart the StreamBase Server instance. Related Topics Recordings View sbrecord Command sbfeedsim Command Using the Feed Simulation Editor Back to Top ^ Test/Debug Guide Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us Using Commands to Perform Recordings and Playbacks 95

Debugging StreamBase Applications

This section describes how to debug StreamBase applications and deployment files in StreamBase Studio and from the command prompt.

Contents

Debugging Overview
Using the EventFlow Debugger

Debugging Overview

Contents

Debugging an Application with Default Configuration
Debugging an Application with a Custom Configuration
Debugging Topics

To debug a StreamBase application means to run the application in debug mode and use the StreamBase visual debugger to pause your running StreamBase application and step through its processing. See Running Compared to Debugging for a discussion of running versus debugging your applications.

The EventFlow debugger works in StreamBase Studio on EventFlow applications. While your application is running, you can stop on breakpoints, see the path currently being taken through your EventFlow diagram, and see the current tuple contents in the Variables view. When using the debugger, start by launching your application in debug mode, as described in the next section.

Debugging an Application with Default Configuration

StreamBase Studio provides a default launch configuration for running applications in debug mode. This lets you debug right away without stopping to configure a launch configuration. To run an application on the local StreamBase Server using the default debug configuration, first make sure your application is currently active in the Editor view, and is saved and free of typecheck errors. Then use any of these methods:

Click the Debug button on the toolbar.

Press F11.

Alt+Shift+D, B. That is, press and hold the Alt, Shift, and D keys at the same time. Release them, and immediately press the B key.

Select Run > Debug As > StreamBase Application from the top-level menu. (This option only appears in the Run menu if the currently selected Editor view is an EventFlow or StreamSQL editor.)

Right-click in the canvas of your application in the Editor view, and select Debug As > StreamBase Application from the context menu.

Right-click the name of your top-level application's EventFlow or StreamSQL file in the Package Explorer view, and select Debug As > StreamBase Application from the context menu.

Click the down-arrow next to the toolbar's Debug button, and select Debug As > StreamBase Application from the menu.

Debugging an Application with a Custom Configuration

Use the Debug Configurations dialog to edit, name, and save a debug configuration. See Editing Launch Configurations. To debug an application with a custom configuration, invoke the configuration's name in one of the following ways:

Open the Debug Configurations dialog as described in Open a Launch Configuration Dialog. Select the name of the launch configuration in the contents pane on the left, then click the Debug button.

Select Run > Debug History > your-config-name from the top-level menu.

Click the down-arrow next to the toolbar's Debug button, and select your configuration's name from the menu.

Debugging Topics

The following topics provide the details of debugging StreamBase applications.

See Using the EventFlow Debugger to learn about using the visual debugging features of StreamBase Studio.

See Editing Launch Configurations to learn about creating and saving custom debug launch configurations for debugging your StreamBase applications.

Using the EventFlow Debugger

Contents

Introduction
Debugger Terminology
Using EventFlow Debugger Features
Breakpoint Lifecycle
Limitations and Suggestions
Launching the Debugger in the Background

Introduction

The StreamBase Studio EventFlow Debugger allows you to see what is going on inside your StreamBase application by inspecting the incremental processing of tuple data. You can suspend a StreamBase application at specified breakpoints, display tuple contents after every operator, and step into and out of modules. If you have used program debuggers before, the EventFlow Debugger's commands and options will be familiar.

The EventFlow Debugger can be used to debug EventFlow application modules and related Java code (such as in custom Java operators), but not StreamSQL. It is an interactive inspection facility and is not for general monitoring or use as a trace debugger for diagnosing server behavior. It is only available within a StreamBase Studio session and is not accessible through the StreamBase Client API.

The EventFlow Debugger is based on the Eclipse Debug Framework (EDF), and is accessed through the SB Test/Debug perspective. Studio re-uses the Eclipse EDF's Debug, Variables, and Breakpoint views. Because the EventFlow Debugger inherits from a Java code debugger, many of its features do not apply to EventFlow debugging. You can learn about the basic EventFlow Debugger features and operations in the Java Development User Guide that is included in the Studio help system. Look for Concepts > Debugger in that guide.

Initiate an EventFlow Debugger session as follows:

1. Start in the SB Authoring perspective.

2. Select the application module of interest and open it in the EventFlow Editor.

3. Set breakpoints on one or more arcs in the module: select an arc, right-click, and select Toggle Breakpoint from the context menu. You can also set breakpoints in Java code for embedded operators

and adapters.

4. In the Studio toolbar, click the Debug button. This opens the SB Test/Debug perspective and starts the application.

5. Use the Manual Input view to send one tuple to the input streams of interest to your test. Blue highlighting begins to surround each component and arc in the EventFlow canvas as the tuple passes through each component, stopping at the first breakpoint encountered.

6. Use controls in the toolbar of the Debug view to stop, pause, resume, step into, and step over. If the module under test calls another module, the debugger opens it and continues tracking the progress of the tuple, then returns to the calling module.

7. Click the red stop button in the Debug view or the Stop Running Application button in the Studio toolbar to stop debugging and return to the SB Authoring perspective.

For the simplest debugging experience for applications with an Extension Point operator, StreamBase Systems recommends temporarily disabling the Run this component in a parallel region option in the Concurrency tab of the Properties view for Extension Point operators. This setting alters the behavior of stepping into a module referenced by an Extension Point, especially when using the Broadcast style of module dispatch. With Broadcast module dispatching and threading disabled, the debugger steps through each module referenced by the operator. However, with threading enabled, the behavior of the debugger's step-into function is altered, and only the first module is stepped through.

If temporarily disabling multi-threading during debugging is not an option, debugging is still possible, though at a more advanced level. For example, while suspended at an input arc of a multi-threaded Extension Point, you can set a breakpoint inside one or more of the modules referenced by the Extension Point. While a subsequent Step Into still does not step into a referenced module (since stepping only occurs within a single thread), the step causes the Extension Point to reference and execute one or more modules in separate threads. If the debugger encounters breakpoints in those referenced modules, the debugger suspends the new thread or threads at each breakpoint's arc. At this point, realize that the Debug view lists multiple threads, and more than one thread can be suspended at any given time.

Debugger Terminology

Arc Breakpoint: A marker that pauses a thread of the program when execution reaches that EventFlow arc.
Execution point: The paused location in the execution flow of a thread, just before the next statement to be executed.
Watchpoint: A setting that stops the running program when a variable is written or read.
Launch: A run configuration loaded into the Debug view that shows the debugged processes, their threads, and the stack traces of suspended threads.
Stack Trace: The current stack of EventFlow arcs and/or Java frames for a suspended thread.
Process: A single thread representing an independent executable portion of a running application.
VM: The Java Virtual Machine interpreting the StreamBase application.

101 Using EventFlow Debugger Features This section introduces the EventFlow Debugger views and commands. There are four views used by the EventFlow Debugger: EventFlow Editor Use this view to create and remove breakpoints, and to follow the flow of execution as it passes through your application. Debug view Includes an entry for the running EventFlow application's process, and threads of that process are listed as children. Use the toolbar buttons of this view to pause and resume your running application, and to step into or step over modules. Breakpoints view Use this view to edit breakpoint properties. Variables view Shows data associated with the currently selected arc or Java frame of a thread in the Debug view. If the Debug view's selection is an EventFlow arc, the Variables view displays the fields of the arc's tuple, as well as dynamic variables of the associated module, and table contents. When the debugger is paused at a breakpoint, you can edit the contents of a variable in the Variables view as described in Variables View below. During an EventFlow debugging session, you: Suspend individual threads of your application using the Debug view. Set breakpoints in the EventFlow Editor. Select a thread's arc or Java frame in the Debug view to inspect in the Variables view. EventFlow Editor The EventFlow Editor shows the flow of the application and marks any arcs containing breakpoints with a symbol. Select an arc and right-click to access the Toggle Breakpoint, Disable Breakpoint, and Enable Breakpoint commands from the context menu. When a thread is suspended, the Debug view entry for that thread can be toggled open to display the thread's stack trace. You can then select an arc or Java frame in that stack trace. If a Java frame is selected, the corresponding Java source is opened in the Editor and the corresponding source line is highlighted, just as is done in traditional Java debugging. If an arc is selected in the Debug view, the corresponding EventFlow module's source is opened in the Editor, and the thread's path to that arc in the module is highlighted in blue, with the selected arc and its target operator highlighted with an outline. Debug View Test/Debug Guide The Debug view allows you to manage the running or debugging of one or more programs in the Studio workspace, including an executing EventFlow program. The Debug view includes a hierarchical list whose top-level elements are the active running or debugged processes. Under the process node for a StreamBase application are listed the threads of that application. In turn, when a debugged thread is suspended, the Debug view displays its stack, which in general consists of a combination of arcs and Java frames (if Java-based operators are involved). Monitors, System Threads, Qualified Names, and Thread Groups are optionally shown in the Debug view. Using EventFlow Debugger Features 101

To compare with debugging a Java application, in place of Java statements, arc elements in the stack trace represent StreamBase arcs. For the purpose of controlling program execution via stepping, an arc is treated as a single statement, and modules are treated as functions. Therefore, when suspended at an arc that is an input to a Module Reference or Extension Point, a Step Into command steps (not to the next Java statement, but) to the next traversed arc in the thread, which is typically inside the referenced module. However, a Step Over command at the same point steps (not over the next Java function call, but) over the referenced module, so that execution suspends at the next arc traversed in the thread, after execution has returned from the referenced module.

If your EventFlow application includes custom Java code, switching back and forth between debugging your EventFlow application and Java code is automatic while stepping through a module. Use the Toggle debugging mode button on the right side of the Debug view's toolbar to switch modes manually.

The commands described in the next sections are available from the right-click context menu when a thread or frame is selected. The most-used commands are repeated as the view's toolbar buttons.

Commands for Launch Management

Resume: Allows a process to return to the running state.

Suspend: Removes a process from the running state for inspection without unloading its threads.

Terminate: When using the EventFlow Debugger, the red square Terminate button in the Debug view's toolbar performs the same clean shutdown of the running StreamBase Server instance as the Stop button in the main Studio toolbar. The terminated debug session cannot be resumed. The SB Test/Debug perspective closes and the previous perspective is restored.

Terminate and Relaunch: Stops the current process and unloads its threads, then immediately restarts the current module.

Disconnect: Disconnects the debugger from the running StreamBase Server. This allows a remote launch and its processes to continue running without further interaction with debugging commands.

Remove all Terminated, Relaunch: Not meaningful for EventFlow debugging.

Edit launch-configuration-name: In StreamBase Studio, launch configurations are usually named the same as the module being run. This command invokes the Edit Configuration dialog that lets you change some of the configuration settings of the currently running module. For example, you can change the logging level or enable intermediate stream dequeuing while the module is still running.

Edit Source Lookup: Opens a dialog that lets you specify the disk location for the Java source code for custom operators and adapters in the current module. This location is automatically set by Studio to files in the Java Build Path for the current module's Studio project, so this command is rarely needed for EventFlow debugging.

Terminate and Remove, Terminate/Disconnect All: Same as Terminate for EventFlow debugging.

Commands for Thread Management

Step Into: Enters a module or executes the next operator in the current stream. The application suspends at the next available execution point, which can be inside a module in a separate EventFlow file, or inside Java code called by a custom Java operator.

Step Over: Executes the entire function of a module or executes the next operator in the current stream. The application suspends at the next available execution point in the current EventFlow file.

Step Return: Executes the entire function of the current module or EventFlow application until control passes to the calling EventFlow application.

Drop to Frame: Select a frame line in the Debug view and invoke Drop to Frame from the toolbar button or context menu. This resets the current execution step back to the selected frame. Use this feature to go backwards in the tuple flow while debugging, which lets you return to inspect the state of an operator you have already stepped through. However, be aware that Drop to Frame cannot undo any side effects of components already stepped through, so it is not a true rewind.

Drop to Arc: This command appears only in the context menu for a selected arc on the canvas. This is much like Drop to Frame for the selected arc: execution is reset back to the selected arc. Like its frame counterpart, Drop to Arc cannot undo any side effects of components already stepped through. The flow of tuples might pass through the same arc more than once, such as in a loop, or if a module is traversed multiple times. In these cases, Drop to Arc is ambiguous, so Studio drops back to the most recent traversal of the arc. If that is not what you wanted, select the particular arc-frame of interest in the Debug view and use Drop to Frame.

Run to Arc: This command appears only in the context menu for a selected arc on the canvas, and is only active when debugging is in progress. Like Run to Line in Java debugging, this command is like a fast-forward control. It sets a temporary breakpoint on the selected arc and resumes execution of the module, stopping at the selected arc. If an intervening breakpoint between the current location and the selected arc is hit, the Run to Arc is cancelled.

Note: Run to Arc is the preferred method for stepping through a sequence of operators. If you repeatedly step into or over too rapidly, you can generate errors of various types, which can include messages in the Error Log, problems with data in the Variables view, and in the worst case, a hang of Studio. Therefore, when you know which downstream arc you want to reach, instead of repeatedly single-stepping to get there, right-click over that arc and choose Run to Arc. Besides saving you time, running to an arc avoids problems that single-stepping too quickly can cause.

Use Step Filters, Edit Step Filters, Filter Type, Filter Package: Not meaningful for EventFlow debugging.

Breakpoints View

The Breakpoints view lists the breakpoints you currently have set in your workspace. You can double-click a breakpoint to display its location in the editor (if applicable). You can enable or disable breakpoints, delete them, add new ones, group them by working set, or set hit counts. This view also controls whether suspension

104 applies to only one thread, or to the entire VM. (Additional control and inspection is available if the entire VM is suspended.) The commands described in the next sections are available from the toolbar and the right-click context menu. Commands for Breakpoint Management Remove Selected Breakpoints Selected breakpoints are removed. Remove All Breakpoints All breakpoints shown are removed. Show Breakpoints Supported by Selected Target Filters the display to show only the breakpoints which affect the current Launch. Go to File for Breakpoint Opens the EventFlow application in the EventFlow Editor in which the selected breakpoint is defined. Skip All Breakpoints Permits the running application to ignore all breakpoints. Suspend VM Suspends the VM containing the breakpoint. Hit Count Sets the number of tuples that must visit the breakpoint before it suspends the running application. Enable Permits this breakpoint to suspend the VM or thread. Disable Prevent this breakpoint from suspending the VM or thread. Remove Removes this breakpoint from the EventFlow Editor. Remove All Removes all breakpoints from the EventFlow Editor. Select All Select all breakpoints. Copy Copy the text description of a breakpoint into the system clipboard. Paste Paste a breakpoint description into the Breakpoint view (if an appropriate target is available). Import Breakpoints Read defined breakpoints from a file into the Breakpoint view. Export Breakpoints Save the breakpoints to an external file. Breakpoint Properties Edit the behavior of a breakpoint (Enabled, Hit Count, Suspend Thread or VM). Commands for Breakpoint View Management Test/Debug Guide Expand All Show all breakpoints in the workspace. Collapse All Show only the top level hierarchical nodes. Link with Debug View When an application suspends on a breakpoint in the Debug view, the associated breakpoint will be highlighted in the Breakpoints view. Breakpoints View 104

Add Java Exception Breakpoint: Not meaningful for EventFlow debugging.

Group By: Specifies nested groupings for the Breakpoints view.

Working Sets: Define a specific set of breakpoints to appear in the view.

Select Default Working Set: Select a set of breakpoints to appear in the view.

Deselect Default Working Set: Reset the view so that no working set is selected.

Show Qualified Names: Not meaningful for EventFlow debugging.

Variables View

The Variables view can display information about the tuple associated with an arc or output stream that is selected in the Debug view. The tuple's fields are listed in the Variables view. The view displays data hierarchically, so if a field's type is a tuple or list, it can be expanded to see sub-fields or list elements, respectively.

To enable the debugger to step faster, text for fields of type tuple, list, and blob is elided in the Variables view for fields containing large amounts of data. You can view the full values of abbreviated tuples by selecting a row. The complete representation of data for that row displays in the Details pane at the bottom of the Variables view.

Fields of type blob display either as hex characters or ASCII characters. To control how blob data displays, go to Window > Preferences > StreamBase Studio > Test/Debug. Set the Display blobs as hex or ASCII characters option and specify the Maximum blob characters to display.

When the debugger is paused at a breakpoint, you can edit the contents of a variable in the Variables view in two ways:

Select the field in the Variables view, right-click, and select Change Value from the context menu. Edit the selected field in the resulting Set Value dialog.

Change the value directly in the details pane for the selected field.

When you resume stepping through the application, StreamBase uses the newly edited value. The commands described in the next section are available from the toolbar and the right-click context menu.

Commands for Variables View Management

Show Type Names: Not meaningful for EventFlow debugging.

Show Logical Structure: Not meaningful for EventFlow debugging.

Collapse All: Show only the top nodes.

Layout > Vertical View Orientation: Place the list above the detail pane in the view.

106 Layout > Horizontal View Orientation Place the list to the left of the detail pane in the view. Layout > Variables View Only Hide the detail pane. Layout > Show Columns Toggle whether columns appear, providing additional information. Layout > Select Columns Optionally show additional information including Name, Declared Type, Value, Actual Type. Java Not meaningful for EventFlow debugging. Select All Highlights all rows in the hierarchical list. Copy Variables Copy the text description of selected fields to the system clipboard. Find Select fields by name. Change Value Not meaningful for EventFlow debugging. All References Not meaningful for EventFlow debugging. All Instances Not meaningful for EventFlow debugging. New Detail Formatter Not meaningful for EventFlow debugging. Open Declared Type Not meaningful for EventFlow debugging. Open Declared Type Hierarchy Not meaningful for EventFlow debugging. Create Watch Expression Not meaningful for EventFlow debugging. Inspect Add the selected tuple to the Expressions view. Note that Watch Expressions are not supported for StreamBase data types. Breakpoint Lifecycle Test/Debug Guide The EventFlow debugger supports traditional Java breakpoints in embedded Java code and breakpoints set on arcs in EventFlow modules. The breakpoint lifecycle begins when you select arcs in the EventFlow Editor from which to initiate inspection and control of the application. An active arc breakpoint pauses the thread that encounters it and meets its filter criteria. When one or more threads is suspended, each suspended thread's state can be inspected. To create or remove a breakpoint, right-click an arc in the EventFlow Editor and select Toggle Breakpoint from the context menu, or use the keyboard shortcut Ctrl+Shift+B. You can apply breakpoints to multiple selected arcs at the same time. To enable or disable an existing breakpoint, right-click and select Enable Breakpoint or Disable Breakpoint. To set additional properties of a breakpoint, open the breakpoint's Properties dialog. You can do this in two ways: Commands for Variables View Management 106

107 Right-click an arc in the EventFlow Editor and select Breakpoint Properties. In the Breakpoints view, select and right-click a breakpoint, then select Breakpoint Properties. In both cases, a Breakpoint Properties dialog opens like the following example: Breakpoint Lifecycle 107

108 This dialog allows you to limit the suspension of the program at the selected breakpoint. Hit count allows you to specify an integer N for the number of times the breakpoint is hit before the program is suspended. Execution of the program suspends when the breakpoint is hit for the Nth time. Conditional (suspend when true) lets you specify a StreamBase expression that evaluates to a Boolean value. The debugger only suspends execution at that breakpoint if the expression evaluates to true. Breakpoints can be created, enabled, and modified at any time during the editing or debugging session, even while the application is running. However, most useful tasks are done when the application is suspended. Most applications consist of multiple threads that take input from continuously running input sources. Pausing a single process or thread does not pause processes running outside the VM, so input buffers can fill while parts of an application are paused. Take care to control input data while debugging in order to minimize the impact of pausing the main application. Notes on Using Conditional Breakpoints Consider the following points when setting and using conditional breakpoints: In addition to the Breakpoint Properties dialog described above, you can keep a portion of the dialog always open in the Breakpoints view. To do this, use the down-pointing arrow on the top right of the Breakpoints view to open its menu. Select Layout → Automatic to show the Hit Count and Conditional settings for the currently selected breakpoint. The Automatic setting places these controls either horizontally or vertically inside the Breakpoints view, depending on your current perspective layout and window size. You can also select Layout → Vertical or Layout → Horizontal to force the controls to your preferred location. Notes on Using Conditional Breakpoints 108

109 If the specified condition evaluates to null, that is treated as if the expression's value was false. In this case, the debugger does not suspend at that breakpoint's arc. If expression evaluation fails because of a syntax error in the expression, then the first time the arc is traversed, the relevant server-side thread suspends, and an error dialog warns of the syntax error. The error dialog gives you the option to display the breakpoint's properties dialog so you can edit the condition expression. Expression evaluation might fail because of a runtime error, such as when an expression causes a divide-by-zero error. In this case, the debugger suspends, and an error dialog explains the problem and provides an opportunity to edit the expression. The condition expression might resolve to a non-boolean value. This case is treated the same as a runtime error, with the resulting error dialog stating "Could not convert the result of the expression to a boolean." Viewing Tuple Contents A tuple's fields are displayed in the Variables view any time the EventFlow application is suspended and the execution point is at an arc or output stream. A thread is typically suspended at an arc or output stream after an arc breakpoint was hit, or after a debugger step operation ends at an EventFlow arc or output stream. Tip Place the Variables view, Debug view, and Application Output view in separate portions of the Studio screen so you can see the contents of all tabs at the same time. The step commands are in the Debug view, but the tuple values are visible in the Variables view. The drop-down menu near the upper right corner of the Variables view gives you the option to hide null values in tuple fields, list elements, and dynamic variables, as the following screen shot shows. Viewing Tuple Contents 109

110 Click this menu and toggle its items to display or hide null values, as illustrated below. Viewing Query Table Contents You can inspect the contents of Query Tables in the Variables view. Query Tables are shown with their fully qualified paths in the Value column of the Variables view. Hit Counts on Arc Breakpoints As with Hit Counts on Java Breakpoints, you can specify a Hit Count on an arc breakpoint to delay suspension at that arc until a specified number of tuples have traversed (that is, hit) that arc. To specify a hit count, first set the arc breakpoint: right-click an arc and select Toggle Breakpoint. Then in the Breakpoints view, select the corresponding breakpoint, right-click and select Breakpoint Properties. The dialog that appears lets you enable the Hit Count option, and lets you specify the count. Suspension Policy The debugger works in either of two modes based on suspension policy, either to suspend the thread on a breakpoint or suspend the VM on a breakpoint. If the VM is suspended, support services such as Heartbeat are also suppressed. If only the thread is suspended, all other threads continue. This can introduce tuple ordering Tip 110

111 differences in applications with multiple parallel concurrent operators and modules, which would not occur during normal operation. Limitations and Suggestions Test/Debug Guide This section describes limitations of the EventFlow Debugger and suggests tips for making the best use of it. Debugger May Miss Arc Breakpoint During Single-Stepping If you single-step quickly and repeatedly while debugging a StreamBase application, unpredictable errors can occur. The types of errors vary, and may include errors in the Error Log, problems with data in the Variables view, or significant performance degradation in StreamBase Studio. You can work around this issue: if you know the upstream arc at which you'd like to stop, instead of repeatedly single-stepping to get to that arc, right-click the upstream arc and select Run to Arc. This method is often easier and faster, and it avoids problems from single-stepping too quickly. Cross-Thread Stepping Not Supported Just as Java debugging does not support cross-thread stepping, neither does EventFlow debugging. This means the Step or Step Into debugger functions cannot step into a Module Reference or Java operator that is marked to run in a parallel region. To continue debugging, you can temporarily disable the Run this component in a parallel region option in the Concurrency tab of the Properties view for the operator or module reference, though that change cannot be done during a debugging launch. During a debug launch, you can set breakpoints in a referenced module or in the code of a Java operator, and such breakpoints are hit in the separate threads. Single-Step With Caution Single-stepping too quickly by repeatedly pressing F6 as fast as possible can hang or even crash the Debugger. Debugger Paused Without Visible Indications In rare circumstances, the Debugger can pause as if it has hit a breakpoint or has stepped to the next arc in sequence, but neither the Debug view nor the EventFlow canvas shows where or why the Debugger is suspended. In these cases, try double-clicking each thread in the Debug view. One of those threads is likely to reveal where execution is paused. Debug View Focus Can Appear to be Lost When Stepping Focus in the Debug view can be intermittently lost when you are stepping through a module at a high speed (for example, by frequently pressing the F6 button). If this occurs, reselect any stack trace element in the execution stack of the Debug View and continue stepping. No Indication of Loop Count When debugging an EventFlow loop, components are highlighted and black-bordered as expected as the tuple proceeds around the loop, but there is no indication of how many times the loop has been traversed. Dynamic Variables in Variables View Might Come From Sequence Operators Be aware that the Sequence operator uses dynamic variables as part of its implementation, so you will see a dynamic variable entry for the sequence IDs generated by Sequence operators in the module you are stepping through. Out of Memory for Very Large Applications, Debugger May Not Start Very large, multi-module applications may report out of memory errors when attempting to start them in the Debugger, especially on 32-bit systems. You can try one or more of the following suggestions to increase the memory available for debugging large applications: Debug using 64-bit Studio on a supported 64-bit platform. Allocate memory to Studio and the application under test separately.
For example, using 64-bit Studio on a machine with 8 GB of RAM, try allocating 2 GB to Studio and the rest to Suspension Policy 111

112 your application. Specify Studio's memory usage with a -Xmx2048M setting in the STREAMBASE_STUDIO_VMARGS environment variable. Specify requested and maximum RAM values for the application under test using -Xms and -Xmx settings in the Run and Debug Configuration for the application. See Java VM Memory Settings and Editing Launch Configurations for details. Use the Compile StreamBase application in separate process option, specified in the Advanced Server Options section of the Main tab of the Launch Configuration dialog, as described in this section of the Editing Launch Configurations page. Possible Slow Response under Certain Conditions There are conditions under which the EventFlow Debugger causes StreamBase Server processing to slow down. In particular, this can happen if a stack trace in the Debug view is especially deep because the path to the current execution point arc is very long. Launching the Debugger in the Background The EventFlow Debugger can stop on breakpoints in early-running code, such as in the init() method for custom operators and adapters. However, to support early breakpoints, you must launch the Debugger in the background. Follow these steps: 1. Use Run → Debug Configurations to open the Debug Configurations dialog. 2. In the left side column, select the debug configuration for the module you want to debug. 3. Select the Common tab. 4. Select the check box labeled Launch in background. 5. Click the Debug button to run this configuration now. If the Debugger is launched in the foreground and hits a breakpoint before StreamBase Server container initialization completes, Studio skips that breakpoint and shows a dialog that reminds you of the steps in this section. Back to Top ^ Test/Debug Guide Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us Limitations and Suggestions 112
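For orientation, the following is a hedged skeleton (not generated code, and not a complete, deployable operator) showing where such an early breakpoint typically lives. The class name is hypothetical, and the overridden method signatures are assumptions based on the usual Java Operator API overrides; check the Javadoc for your release before relying on them.

    import com.streambase.sb.StreamBaseException;
    import com.streambase.sb.Tuple;
    import com.streambase.sb.operator.Operator;
    import com.streambase.sb.operator.TypecheckException;

    // Hedged skeleton of a custom Java operator; names and signatures are assumptions.
    public class MyEarlyInitOperator extends Operator {

        public MyEarlyInitOperator() {
            super();
        }

        @Override
        public void init() throws StreamBaseException {
            // Set a Java breakpoint on the next line. init() runs during container
            // initialization, so the breakpoint is honored only when the Debugger
            // is launched in the background, as described in the steps above.
            super.init();
        }

        @Override
        public void typecheck() throws TypecheckException {
            // No ports or parameters are checked in this sketch.
        }

        @Override
        public void processTuple(int inputPort, Tuple tuple) throws StreamBaseException {
            // No runtime processing in this sketch.
        }
    }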

113 Site Map Index HomeInstallationStartAuthoringStreamSQLTest/DebugAPI GuideAdminAdaptersSamplesStudio GuideReferences Current Location: Home > Test/Debug Guide > Intermediate Stream Dequeuing Intermediate Stream Dequeuing Contents Introduction ISD Compared to Other Features What Streams Are Exposed Enabling ISD Intermediate Stream Naming Convention Unconventional Intermediate Stream Names Related Topics Introduction For debugging purposes only, you can dequeue data from intermediate streams in your application, not only from output streams. This debugging feature allows you to examine the value of data at an intermediate point in the application before it has been fully processed and before it has arrived on an output stream. This feature is not to be used for inserting new data into the middle of an application. Caution When intermediate stream dequeuing is enabled, the StreamBase Server instance hosting your application consumes considerably more memory, to the point that you might not be able to successfully launch large applications. Intermediate stream dequeuing (ISD) is disabled by default. You can enable it for a particular run or debug session as described below in Enabling ISD. Note In releases before 6.4.1, ISD was always enabled when running an application in debug mode. This default behavior was changed starting in release 6.4.1 to allow debugging of larger applications on memory-limited computers. Intermediate Stream Dequeuing 113

114 ISD Compared to Other Features Do not confuse intermediate stream dequeuing with the always expose feature of input and output streams. Always Expose Stream Feature The General tab of the Properties view for input and output streams has a check box labeled Always expose stream for enqueue or dequeue, respectively. This feature is an attribute of explicit input and output streams that allows you to designate a stream to be en- or dequeuable no matter how deeply its containing module is nested. This feature only affects input and output streams. Intermediate Stream Dequeue ISD is a property setting of the hosting server, and affects all modules in an application. This feature allows you to dequeue not only from output streams, but from the output ports of Map, Filter, Union, or other operators in any module. You can limit the number of operators exposed for dequeuing by specifying a regular expression to match against the names of intermediate streams. For another debugging approach, consider using runtime tracing in place of, or in addition to, ISD. Runtime tracing is described in Runtime Tracing and Creating Trace Files, and includes a comparison table in Tracing Versus Dequeuing. What Streams Are Exposed Test/Debug Guide The following table clarifies what streams are exposed for dequeuing for various settings of intermediate stream dequeuing.
ISD Setting: ISD disabled (the default for run and debug modes)
  Regular Expression Setting: N/A
  Dequeuable Streams: For the top-level module only: All explicit output streams. Any sub-module output stream individually exposed for dequeuing in its Properties view.
ISD Setting: ISD enabled
  Regular Expression Setting: None specified.
  Dequeuable Streams: For the top-level module and all sub-modules: All explicit input and output streams. All explicit error input and error output streams. All intermediate streams in all modules.
ISD Setting: ISD enabled
  Regular Expression Setting: Regular expression substring specified.
  Dequeuable Streams: For the top-level module: All explicit input and output streams. All explicit error input and error output streams. For all sub-modules: Any input, output, error, or intermediate stream whose name matches the supplied regular expression substring. Any sub-module output stream individually exposed for dequeuing in its Properties view.
ISD Compared to Other Features 114

115 Note Error output streams are considered intermediate streams for purposes of dequeuing. To allow dequeuing directly from an error output stream, you must enable ISD. Note that this is useful only for debugging: in a well-designed application, your application does not need to read the tuples output on error streams, which are passed automatically to the next higher module, or to the hosting Server. Enabling ISD You can enable intermediate stream dequeuing in several ways: Enable ISD in StreamBase Studio Enable ISD for Command Line sbd Enable ISD in Configuration Files With all methods, you can enable either full or partial ISD: Full ISD means enabling dequeuing on all available intermediate streams in all modules in the application. Partial ISD means limiting the intermediate streams enabled for dequeuing by specifying a regular expression to match against the names of streams. Only intermediate streams whose name matches the expression as a substring are enabled for dequeuing. Enable ISD in StreamBase Studio Follow these steps to expose intermediate streams when running or debugging an application in Studio: 1. Configure and save a run or debug configuration for the application whose intermediate streams you want to expose. Run configurations are described in Editing Launch Configurations. 2. On the Advanced tab of the run configuration dialog, check the Enable Intermediate Stream Dequeue check box. When used without a filter pattern, this specifies full ISD. 3. Optional step, to specify partial ISD. To restrict the number of intermediate streams exposed, enter a regular expression pattern to match against stream names. Intermediate streams whose name matches the provided expression as a substring are exposed for dequeuing; all other intermediate streams are not. 4. Run the application by running the launch configuration. Intermediate streams now appear as selectable output streams in the Application Output view's list of streams. The following illustration shows the Output Stream Selector dialog from the Application Output view for two cases of running the Bollinger Band sample installed with StreamBase. You see the dialog on the left when running with the default run configuration. It shows only the three output streams defined in the application. You see the dialog on the right when running with full intermediate stream dequeuing enabled. It What Streams Are Exposed 115

116 shows the same three output streams plus the output port of all intermediate components as selectable streams. Enable ISD for Command Line sbd To enable intermediate stream dequeuing when starting StreamBase Server from the command line, use sbd with its --intermediate-stream-dequeue option. For example: sbd --intermediate-stream-dequeue BestBidsAsks.sbapp The --intermediate-stream-dequeue option specifies full ISD, with all intermediate streams enabled. To specify partial ISD for sbd, use the configuration file method described in the next section. Enable ISD in Configuration Files To enable intermediate stream dequeuing in the server configuration file, set the streambase.codegen.intermediate-stream-dequeue property to true, using the jvm-args or sysproperty element of the server configuration file: <sysproperty name="streambase.codegen.intermediate-stream-dequeue" value="true" /> The default for this property is false. To enable partial ISD, specify a regular expression that matches the names of the streams you want to expose. Make this specification with the streambase.codegen.intermediate-stream-dequeue-regex property, using the jvm-args or sysproperty element of the server configuration file: <sysproperty name="streambase.codegen.intermediate-stream-dequeue-regex" value="pattern" /> For example, the regular expression pattern Map\\d allows any intermediate stream whose name contains the string "Map" followed by a single digit to be exposed as a dequeuable stream. All other intermediate streams are not available. Enable ISD in StreamBase Studio 116

117 <sysproperty name="streambase.codegen.intermediate-stream-dequeue-regex" value="Map\\d" /> Intermediate Stream Naming Convention With intermediate stream dequeuing enabled, you can dequeue data from any output port of any operator using a stream name of the form: out:OperatorName_N Test/Debug Guide where OperatorName is the name of the operator and N is the number of the output port for that operator. Consider the Split.sbapp application, which is one of the operator samples installed with the StreamBase kit: To verify the intermediate stream names, run sbc list when the application is running with ISD enabled. For example:
container default
container system
stream INTRUSION_TooManyIPsForUser
stream INTRUSION_TooManyUsersForIP
stream IPandUserLogin
stream out:CheckIPsForAUser_1
stream out:CheckUsersInAnIP_1
stream out:ProcessIPFirst_1
stream out:ProcessIPFirst_2
schema schema:IPandUserLogin
operator CheckIPsForAUser
operator CheckUsersInAnIP
operator IPCountExceeded
operator ProcessIPFirst
operator UserCountExceeded
Compare the EventFlow diagram above with the sbc list output. Notice the following: Intermediate streams are shown because we are running the server with ISD enabled, as described above. Stream names such as out:ProcessIPFirst_2 illustrate the default naming convention for intermediate streams. The output ports for the Filter operators, UserCountExceeded and IPCountExceeded, are connected to actual output streams, and thus do not have intermediate streams. Of course, you can dequeue from both intermediate streams and explicit output streams. Enable ISD in Configuration Files 117
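With ISD enabled, an intermediate stream can be dequeued like any other stream, for example with sbc dequeue. As an illustration only, here is a hedged sketch that does the same thing with the StreamBase Java client API. The server URI is an assumption, the stream name is taken from the sbc list output above, and the client classes and methods used (StreamBaseClient, subscribe, dequeue, DequeueResult.getTuples) should be confirmed against the Javadoc for your release.

    import com.streambase.sb.Tuple;
    import com.streambase.sb.client.DequeueResult;
    import com.streambase.sb.client.StreamBaseClient;

    // Hedged sketch: dequeue tuples from an intermediate stream exposed by ISD.
    public class IntermediateDequeue {
        public static void main(String[] args) throws Exception {
            // The URI is an assumption; adjust it for your server.
            StreamBaseClient client = new StreamBaseClient("sb://localhost:10000");
            client.subscribe("out:ProcessIPFirst_1");   // intermediate stream from sbc list
            DequeueResult result = client.dequeue();    // blocks until tuples arrive
            for (Tuple t : result.getTuples()) {
                System.out.println(t);
            }
            client.close();
        }
    }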

118 You can also determine intermediate stream names by viewing the XML source for your EventFlow application. Look for the stream values associated with output ports. For example:
<box name="ProcessIPFirst" type="split">
    <input port="1" stream="IPandUserLogin"/>
    <output port="1" stream="out:ProcessIPFirst_1"/>
    <output port="2" stream="out:ProcessIPFirst_2"/>
    <param name="output-count" value="2"/>
...
Unconventional Intermediate Stream Names You might find intermediate stream names that do not appear to follow the naming convention described in the previous section. When determining the name of intermediate streams from which to dequeue, you must confirm the names as actually used in your application. Confirm with an sbc list command or by viewing the XML source of your EventFlow application. In some cases, the name of an intermediate stream can vary from the default convention, depending on the history of edits to the application. For example, consider the following installed sample, AggregateByDim.sbapp: When you run this sample in debug mode (or with the JVM argument set as above), an sbc list command returns:
container default
container system
stream AvgPricePSOut
stream OutputStream1
stream TradesIn
schema schema:TradesIn
operator Aggregate2Dimensions
operator ConvertTimeToSeconds
By default, the intermediate stream between the Aggregate2Dimensions and ConvertTimeToSeconds operators would have been named out:Aggregate2Dimensions_1. But at some point in this application's history, the Aggregate2Dimensions operator was connected to an output stream named OutputStream1, and was subsequently disconnected from that output stream (which was either removed or renamed). Despite the edits, the original output stream name (OutputStream1) defined for Aggregate2Dimensions is still in use. In other words, do not assume that each operator's output stream name always follows the default convention. Related Topics For related information, see: Editing Launch Configurations Intermediate Stream Naming Convention 118

119 Runtime Tracing and Creating Trace Files Using StreamBase Logging Back to top ^ Test/Debug Guide Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us Related Topics 119

120 Site Map Index HomeInstallationStartAuthoringStreamSQLTest/DebugAPI GuideAdminAdaptersSamplesStudio GuideReferences Current Location: Home > Test/Debug Guide > Trace Debugging Trace Debugging This section describes how to create tuple trace files for a running application, and how to use the Trace Debugger. Contents Runtime Tracing and Creating Trace Files Trace Debugging in StreamBase Studio Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us Trace Debugging 120

121 Site Map Index HomeInstallationStartAuthoringStreamSQLTest/DebugAPI GuideAdminAdaptersSamplesStudio GuideReferences Current Location: Home > Test/Debug Guide > Trace Debugging > Runtime Tracing and Creating Trace Files Runtime Tracing and Creating Trace Files Contents Introduction to Runtime Tracing Setting Up for Runtime Tracing Configuring Runtime Tracing Command Line Tracing Examples Deployment File Example Format of Trace Files Introduction to Runtime Tracing When enabled, runtime tracing writes tuple trace information to StreamBase Server's console or to a trace file, one file per container. If any component or module in your application uses concurrency features, then a separate trace file is created for each separate parallel region. (See Execution Order and Concurrency for more on concurrency.) The default trace information is a single timestamped line per tuple. Each tuple is shown as received at each operator and stream in the application. Trace files can grow quite large very quickly, so runtime tracing is best used only for short bursts, for debugging only, to follow a tuple's progress through an application. You can limit the number of operators and streams that are subject to tracing by using a regular expression to narrow the list of the components of interest. Tracing does not occur unless you set up StreamBase Server for it and tracing is enabled. To set up for tracing, set the system property streambase.codegen.trace-tuples or environment variable STREAMBASE_CODEGEN_TRACE_TUPLES to true. Once set up, tracing is enabled by default when the application starts, unless you disable it at runtime with sbadmin modifycontainer containername trace false. In that case, re-enable tracing with sbadmin modifycontainer containername trace true. Trace files have the extension.sbtrace, or.sbtrace.gz for compressed files. Trace files are stored with UTF-8 encoding, which preserves any Unicode characters in tuple fields in the trace file. Runtime Tracing and Creating Trace Files 121

122 The tracing facility also creates a second file with extension .sbtrace.meta. This file, in XML format, contains the equivalent of the output of sbc describe. This file is automatically used by the Trace Browser perspective when reading and displaying .sbtrace files. Tracing Versus Dequeuing Runtime tracing is similar to, but not the same as, using sbc dequeue to dequeue tuples from any stream in your application, including intermediate streams, if enabled (see Intermediate Stream Dequeuing). The following table shows the differences between runtime tracing and dequeuing.
Feature: See tuple output from...
  Standard Dequeuing: All explicit streams by default, and all intermediate streams if enabled.
  Runtime Tracing: All operators and streams (optionally limited by regular expression selection).
Feature: Timestamp shown for each tuple
  Standard Dequeuing: No
  Runtime Tracing: Yes
Feature: Save output to a file or console
  Standard Dequeuing: Must redirect console output or use an output adapter.
  Runtime Tracing: Specify with a runtime option.
Feature: Starts with StreamBase Server
  Standard Dequeuing: No. It takes time to start the Server, then start a dequeuing client. The first few tuples can be missed.
  Runtime Tracing: Yes. Even the first few tuples are captured.
Feature: Can see tuple output in StreamBase Studio
  Standard Dequeuing: Yes.
  Runtime Tracing: Yes. Trace files can be opened, searched, and stepped through in Studio's SB Trace Debugger perspective.
Setting Up for Runtime Tracing To use the runtime tracing feature, you must set up StreamBase Server to respond to tracing commands. Use either of the following methods: Set the environment variable STREAMBASE_CODEGEN_TRACE_TUPLES to true. Set the Java system property streambase.codegen.trace-tuples to true. Use the sysproperty or jvm-args directives in the server configuration file (described in StreamBase Server Configuration File XML Reference). For example:
<java-vm>
    <sysproperty name="streambase.codegen.trace-tuples" value="true"/>
</java-vm>
or Test/Debug Guide
<java-vm>
    <param name="jvm-args" value="-XX:MaxPermSize=128m -Xms256m -Xmx512m -Dstreambase.codegen.trace-tuples=true" />
</java-vm>
Introduction to Runtime Tracing 122

123 Configuring Runtime Tracing Configure runtime tracing with one of three methods: Test/Debug Guide Use the trace child element of the application element in a StreamBase deployment file. Use the sbadmin addcontainer command at runtime with its --trace* options. Use the sbadmin modifycontainer command at runtime with its trace true and trace false subcommands, and its --trace* options. All three methods support the same options, as described in the following table:
trace element: tracestreampattern="pattern"
  sbadmin option: --tracestreampattern="pattern"
  Default: none
  Description: Specify a regular expression pattern to be parsed by java.util.regex.Pattern. This limits tracing to the operators and streams whose names match the pattern. If not specified, tracing is active for all operators and streams in the application.
trace element: tracefilebase="basename"
  sbadmin option: --tracefilebase="basename"
  Default: none
  Description: By default, trace output is written to StreamBase Server's console. To redirect trace output to a file, specify tracefilebase with a basename. There is one trace file generated per module, plus one trace file for each parallel region in a module. Specify a string to serve as the basename for trace files, generated as basename+containername.sbtrace. For modules running with parallel execution, the module path becomes part of the trace file name: basename+containername.modulepath.sbtrace. Tracing also creates a second file with extension .sbtrace.meta. This file, in XML format, contains the equivalent of the output of sbc describe and is automatically used by the Trace Debugger perspective when reading and displaying .sbtrace files.
trace element: traceoverwrite="true"
  sbadmin option: --traceoverwrite
  Default: false
  Description: For the trace deployment file element, specify "=true" or "=false" (default). For the sbadmin option, specify it without arguments to set the true state. Setting to true causes tracing to overwrite any existing trace files at StreamBase Server startup. Setting to false causes new trace information to be appended to existing trace files for each traced container.
trace element: tracecompress="true"
  sbadmin option: --tracecompress
  Default: false
  Description: For the trace deployment file element, specify "=true" or "=false" (default). For the sbadmin option, specify it without arguments to set the true state. Set to true to specify that the generated trace file is compressed with gzip. If true, the trace file extension becomes .sbtrace.gz. Compressed trace files can be read by Studio's Trace Debugger perspective.
trace element: tracebuffered="false"
  sbadmin option: --tracebuffered=true or --tracebuffered=false
  Default: true
  Description: When set to false, the output trace file is flushed after every trace line. This slows down performance, but makes it easier to use tracing as part of a running application. When set to true (the default), trace output is buffered before the trace file is written, reducing the drag on performance.
Configuring Runtime Tracing 123

124 When using either the <trace> element of the deployment file or sbadmin addcontainer, if the setup conditions are met, tracing is enabled by default when the application starts. You can disable tracing at runtime with sbadmin modifycontainer containername trace false. In that case, re-enable tracing with sbadmin modifycontainer containername trace true. As soon as you disable or stop tracing, all trace files are closed. Command Line Tracing Examples The following sequence of commands modifies the default container to enable tracing, runs a feed simulation, then disables tracing. Trace information is written to a trace file in the current directory named tr_default.sbtrace.gz and to a metadata file named tr_default.sbtrace-meta. The trace file contains tracing information about all operators and streams with the string Bids in their names. UNIX Tracing Example In one terminal window, type:
export STREAMBASE_CODEGEN_TRACE_TUPLES=true
sbd BestBidsAsks.sbapp
In another terminal window, type:
sbadmin modifycontainer default trace true --tracefilebase="tr_" \
    --tracestreampattern="Bids" --traceoverwrite --tracecompress
sbfeedsim -q NYSE.sbfs
Ctrl+C [to stop the feed simulation]
sbadmin modifycontainer default trace false
Windows Tracing Example The following shows the same sequence of commands for Windows, using a StreamBase Command Prompt:
set STREAMBASE_CODEGEN_TRACE_TUPLES=true
start sbd BestBidsAsks.sbapp
sbadmin modifycontainer default trace true --tracefilebase="tr_" --tracestreampattern="Bids" --traceoverwrite --tracecompress
sbfeedsim -q NYSE.sbfs
Ctrl+C [to stop the feed simulation]
sbadmin modifycontainer default trace false
You can uncompress the resulting trace file to examine it in a text editor, or you can load it in compressed form into the Studio Trace Debugger for analysis and interactive trace debugging. When done, type: Test/Debug Guide Command Line Tracing Examples 124

125 sbadmin shutdown Deployment File Example Use the deployment file trace element like this example:
...
<application container="default" module="BestBidsAsks.sbapp">
    <trace tracestreampattern="Bids"
           tracefilebase="trace_"
           traceoverwrite="true"
           tracecompress="true" />
</application>
...
This creates a trace file named trace_default.sbtrace.gz, which is overwritten each time the server restarts, and contains tracing information about all operators and streams with the string Bids in their names. Format of Trace Files Trace files consist of one header line (shown below on two lines for publication clarity), plus one trace line per tuple received at each specified operator and stream. The following example shows the start of a trace file for the Best Bids and Asks sample shipped with StreamBase, running in the default container named default:
# ms since epoch, tupleid, operator/streamname, tuple-data - \
trace started: :09: ms
,1,default.NYSE_Feed,33409,NLY,14.0,0,16.0,0,
,1,default.out:Update_Bids_and_Asks_1,33409,NLY,14.0,16.0,14.0,
,1,default.out:IsNewBestAsk_1,33409,NLY,14.0,16.0,14.0,
,1,default.BestAsks,33409,NLY,
,1,default.out:IsNewBestBid_1,33409,NLY,14.0,16.0,14.0,
,1,default.BestBids,33409,NLY,
,2,default.NYSE_Feed,33620,NCC,28.5,0,30.5,0,
,2,default.out:Update_Bids_and_Asks_1,33620,NCC,28.5,30.5,28.5,
,2,default.out:IsNewBestAsk_1,33620,NCC,28.5,30.5,28.5,
Trace lines consist of at least four fields, described in the following table:
Field 1: Timestamp in milliseconds representing the interval of time since the epoch.
Field 2: Tuple ID.
Field 3: Fully qualified name of the operator or stream output port.
Fields 4 through n: The comma-separated fields of the tuple as it exits the port described in field 3.
Back to Top ^ Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us Deployment File Example 125
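As a companion to the trace file format described above, here is a minimal sketch (not part of the product) that reads a gzip-compressed trace file and splits each trace line into the field groups listed in the table. The file name is hypothetical; only standard JDK classes are used.

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;

    // Minimal sketch: print the timestamp, tuple ID, port name, and tuple data
    // from each line of a compressed trace file (file name is hypothetical).
    public class TraceFileReader {
        public static void main(String[] args) throws Exception {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    new GZIPInputStream(new FileInputStream("trace_default.sbtrace.gz")),
                    StandardCharsets.UTF_8))) {              // trace files are UTF-8 encoded
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("#")) continue;      // skip the header line
                    String[] f = line.split(",", 4);         // ms, tuple ID, port, tuple data
                    System.out.printf("t=%s id=%s port=%s data=%s%n", f[0], f[1], f[2], f[3]);
                }
            }
        }
    }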

126 Site Map Index HomeInstallationStartAuthoringStreamSQLTest/DebugAPI GuideAdminAdaptersSamplesStudio GuideReferences Current Location: Home > Test/Debug Guide > Trace Debugging > Trace Debugging in StreamBase Studio Trace Debugging in StreamBase Studio Contents Creating a Trace File in Studio Analyzing Trace Files Further Details about Trace Debugging Associating a Trace File with a StreamBase Application Managing Trace Files Restricting the Scope of Traces The StreamBase Studio Trace Debugger perspective provides a way to review and analyze runtime trace files generated while an application was running. You can load and analyze trace files any time after the application has stopped. Trace debugging is a way to study an application's behavior offline by stepping through a trace file, line by line. The file contains rows of tuple values in the order they were processed, and shows you how they change, stream by stream. If your application is open in Studio, when you step through a trace the location of each line you select in the trace debugger is automatically highlighted in the associated EventFlow module. Note Unlike regular debugging, which you do while running in Debug mode, enabling trace debugging generates a trace file of input, output, and intermediate tuples when you execute your application with Run → Trace As. You then send data to the application. When you stop the application, you view the file's contents in the SB Trace Debugger perspective. Creating a Trace File in Studio You can generate trace files for this purpose in two ways: Enable trace file generation for your application as described in Runtime Tracing and Creating Trace Files. When you run your application, either provide it with test data in a feed simulation, or run it with live data. When the application stops, switch to the SB Trace Debugger perspective and open the generated trace file. Launch your application in Studio in Trace Debug mode. Trace Debugging in StreamBase Studio 126

127 Trace Debug mode is a third launch mode for running applications in Studio, in addition to Run and Debug modes. To run in Trace Debug mode: 1. Start with your application's EventFlow, StreamSQL, or Deployment File Editor open, with the application typechecked and free of errors. It is best to have a feed simulation prepared and ready to run. 2. Click the Trace Debugger button in the Studio toolbar, or select Run → Trace As → StreamBase Application. 3. Studio runs the application normally and automatically starts recording a trace file for the main application in the default container. The status button in the lower left corner of the Studio window shows a message like the following: 4. Send tuples to the application manually or with a feed simulation. 5. When you stop the application, Studio automatically switches to the SB Trace Debugger perspective, with the newly recorded trace file already loaded. You can now use this perspective to follow the progress of a single tuple through your application. You can edit and save Trace Debugger launch configurations in the same way as Run and Debug launch configurations. For more information, see Editing Launch Configurations. Analyzing Trace Files When you open the SB Trace Debugger perspective, you see its Query View with your most recent trace file preloaded. You can select different trace files using the Browse button, reopen the current file by clicking Refresh, and close the file and remove all trace display window tabs that you have created when tracing by clicking Reset Session. The Query View is shown in the following illustration: Creating a Trace File in Studio 127

128 The Query view lets you: Navigate to and open a trace file using the Trace File pane. If you have just performed a trace, its file will automatically open when you stop the application. Browse the currently open trace file using the Browse pane. Set the Time timestamp index slider to view contents of tuples emitted in the time frame it indicates. The default is the first tuple in the file. Click Open Tuple Trace to open a view in which you can trace the path of that tuple through the application. Use the Search pane to search the trace file for a particular field value. The value you specify is a string, not an expression, and it can occur in any tuple field. (The trace file is a CSV file that does not specify data types, so there is no typechecking.) When you select Run Query at the bottom of the pane, you see a list of the tuples that match your query in the Matches tab of that pane. Analyzing Trace Files 128

129 When you Click Open Tuple Trace, you see a representation of tuples for the time frame in the trace results window as a grid of color-coded values. Here is an example of tuples displayed in that window. The Color Legend button on the top right of the display panel describes the color coding for the tuples: Step through input tuples in the file backward and forward with the Previous Tuple and Next Tuple buttons, respectively. Click in a row of tuples to select it, or Shift+click to select a range of rows. To copy the selection to the clipboard, right-click and select Copy Selected Rows. When you select rows, the locations in your application they map to are highlighted, as shown below: Analyzing Trace Files 129

130 Further Details about Trace Debugging Unless you have changed configuration settings (as described in Setting Up for Runtime Tracing), StreamBase Studio is pre-configured for trace debugging. The default location for your trace files is your default temporary directory (normally C:\Users\username\AppData\local\temp). However, you can specify a location for trace files by selecting Run -> Trace Configurations, clicking the Advanced tab, enabling the Specified button under Persistent Data Options and typing or browsing to a path naming the desired location. Associating a Trace File with a StreamBase Application If you are reviewing a trace from a prior session, to enable highlighting of operators, data constructs, and adapters as shown above, you need to associate the trace file with the EventFlow application that created it. To do that, click the Associate button under the navigation buttons in the trace data display window and select the application that was traced from those in your workspace. Further Details about Trace Debugging 130

131 Managing Trace Files Trace files are created in pairs, with the data in a .gz archive and schema information in a .sbtrace-meta XML file. Trace file names take the form <application-name>-<timestamp><container>.sbtrace.gz. The associated meta file name takes the form <application-name>-<timestamp><container>.sbtrace-meta. Trace files persist until you manually delete them. As they can grow quite large for complex applications, especially when run at high data rates, you should monitor the contents of your trace file folder and delete files you no longer need. Restricting the Scope of Traces You can limit the size of trace files and save time when analyzing them by restricting their scope. If you are only interested in debugging certain operators and streams, you can restrict run-time tracing to only include them. Do this by specifying a regular expression in the Advanced tab of the Trace Configurations dialog. Select Filter trace data using this regular expression when tracing, and enter the expression in the text box next to it. Use a regular expression to be parsed by java.util.regex.Pattern. For example, if you are interested only in tuples that end up at one of several output streams, you can insert the name of that operator in the Filter trace data using this regular expression when tracing text field. Open the StreamBase sample BestBidsAsks.sbapp and in the Filter trace data using this regular expression when tracing field type BestBids to trace only tuples that represent bids, not asks. Click Trace, and run with the NYSE.sbfs feed simulation. When you stop BestBidsAsks, the SB Trace Debugger perspective displays only tuples arriving at the BestBids output stream. The Search pane in a Query view also restricts what tuples the Trace Debugger displays. When analyzing a trace file, you can view only those tuples that arrive at certain operators with a value that you specify. The matching is applied to all fields unless you restrict it by clicking the twist-down > controls next to operator names in the Query pane and deselecting field names. Search terms are simple strings, and a value matches if it contains the search term as a substring. Thus, searching for 4.0 also finds values such as 14.0 and 4.05. Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us Managing Trace Files 131
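To make the substring-matching behavior concrete, here is a small illustrative Java sketch using only the JDK regex classes. The stream names are hypothetical, and it assumes, as described above, that a filter pattern selects any name containing the pattern anywhere within it.

    import java.util.regex.Pattern;

    // Illustrative only: how a substring-style filter such as the trace filter
    // presumably selects names (the names below are hypothetical).
    public class FilterDemo {
        public static void main(String[] args) {
            Pattern filter = Pattern.compile("BestBids");
            String[] streams = { "default.BestBids", "default.BestAsks", "default.out:IsNewBestBid_1" };
            for (String name : streams) {
                // find() succeeds when the pattern occurs anywhere in the name.
                System.out.println(name + " -> " + (filter.matcher(name).find() ? "traced" : "skipped"));
            }
        }
    }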

132 Site Map Index HomeInstallationStartAuthoringStreamSQLTest/DebugAPI GuideAdminAdaptersSamplesStudio GuideReferences Current Location: Home > Test/Debug Guide > StreamBase JUnit Tests StreamBase JUnit Tests StreamBase JUnit tests provide a way to verify that an application module produces the exact expected output for specific input. A StreamBase JUnit test is a Java program that starts StreamBase Server, loads the application module to be tested, sends tuples to the module's input streams, and compares the output emitted from the module's output streams to an expected set of tuples. You can run a StreamBase JUnit test: Interactively, in StreamBase Studio. From the shell command prompt with the sbunit command. As part of an automated application test script. To use the StreamBase JUnit test mechanism, you must know how to write Java code, and should be familiar with Java JUnit in general. See StreamBase JUnit Test Tutorial for a step by step tutorial that walks through each step of generating a StreamBase JUnit test for one of the StreamBase samples. StreamBase also provides the StreamBase Test mechanism, which is comparable to a macro recording and playback facility, and does not require writing Java code. The StreamBase Test system is described in StreamBase Tests (sbtest). Contents Creating and Running StreamBase JUnit Tests Editing StreamBase JUnit Tests StreamBase JUnit Test Tutorial Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us StreamBase JUnit Tests 132

133 Site Map Index HomeInstallationStartAuthoringStreamSQLTest/DebugAPI GuideAdminAdaptersSamplesStudio GuideReferences Current Location: Home > Test/Debug Guide > StreamBase JUnit Tests > Creating and Running StreamBase JUnit Tests Creating and Running StreamBase JUnit Tests Contents Introduction Library Requirements Generating JUnit Test Code Running StreamBase JUnit Tests in Studio Running StreamBase JUnit Tests at the Command Prompt Testing StreamBase Deployment Files with JUnit Tests Introduction A StreamBase JUnit test is a Java file based on the org.junit and com.streambase.sb.unittest packages. To use the StreamBase JUnit test feature, you must be able to write Java code, and you should be familiar with Java JUnit in general. Use the StreamBase JUnit feature with the following steps: Library Requirements. Before generating your JUnit code, you must prepare your StreamBase project folder by adding libraries to its Java Build Path. Generating JUnit Test Code. Run the StreamBase JUnit wizard to generate starting point Java test code for your project. Editing StreamBase JUnit Tests. Edit and complete the generated test code. Running StreamBase JUnit Tests in Studio. Run the completed JUnit test file in Studio or at the Command Prompt. See StreamBase JUnit Test Tutorial for a step by step tutorial that walks through each step of generating a StreamBase JUnit test for one of the StreamBase samples. Library Requirements When you create a new StreamBase project using the New StreamBase Project wizard, Studio offers the option to include the StreamBase Test Library to the new project's Java Build Path. Creating and Running StreamBase JUnit Tests 133

134 For existing projects, you must add both the JUnit 4 Library and StreamBase Test Library to the project's Java Build Path for your JUnit test code to be generated without errors. You can do this: In advance As part of running the New StreamBase Unit Test Class wizard Afterwards Adding the Libraries in Advance You can add the two necessary libraries to the Java Build Path before running the wizard that generates your StreamBase JUnit test class. Follow these steps: 1. Select the project in the Package Explorer view. 2. Right-click and select Build Path → Add Libraries from the context menu. 3. In the Add Library dialog, select JUnit and click Next. 4. In the next panel, select JUnit 4 from the drop-down list, and click Finish. Library Requirements 134

135 5. Back in the Package Explorer, reselect the same project, right-click, and select Build Path → Add Libraries from the context menu. This time, in the Add Library dialog, select StreamBase Test Support. 6. Click Next, then Finish. Adding the Libraries as Part of the Wizard You can add the two necessary libraries to the Java Build Path as part of running the wizard that generates your StreamBase JUnit test class. Follow these steps: 1. If the New StreamBase Unit Test Class dialog shows a warning that the Test Support library is not on the project's Java build path, select the Click here link. Adding the Libraries in Advance 135

136 2. This opens the Studio Preferences dialog, with the Libraries tab of the Java Build Path panel already open. Hold the Ctrl key and select both JUnit4 and StreamBase Test Support. Click OK. This adds the two libraries to the project's build path, then returns you to the New StreamBase Unit Test Class dialog. Adding the Libraries After Code Generation If the necessary libraries are not on the build path when the New StreamBase JUnit wizard generates your StreamBase JUnit test class, then the generated class shows dozens of errors. Use these steps to correct this situation: Adding the Libraries as Part of the Wizard 136

137 1. Add the two libraries to the project's Java Build Path, using the steps in Adding the Libraries in Advance. 2. With your generated JUnit Java file open in Studio, run Source → Organize Imports from Studio's top-level menu. Generating JUnit Test Code To create a new StreamBase JUnit Test: Test/Debug Guide 1. Prepare the target project folder by adding libraries to its Java Build Path, as described in the previous section. 2. In the Package Explorer, first select the module unit you want to test. Select an EventFlow or StreamSQL module, or a deployment file that specifies the module of interest. 3. Open the New StreamBase Unit Test Class wizard using any of the following methods: Select File → New → StreamBase Unit Test (JUnit). Click the New StreamBase Unit Test ( ) button in the toolbar. Click the drop-down arrow next to the New toolbar button ( ), and select StreamBase Unit Test (JUnit) from the drop-down menu. Right-click anywhere in the Package Explorer, and select New → StreamBase Unit Test (JUnit) from the context menu. With the cursor in any Studio view, press Ctrl+N to open the New dialog. Select StreamBase Unit Test (JUnit) and click Next. With the cursor in any Studio view, press Alt+Shift+N to open the File → New menu at the cursor location. Select StreamBase Unit Test (JUnit) and press Enter. 4. Fill in the fields of the New StreamBase Unit Test Class dialog:
Source folder: If you entered the dialog by first selecting any file in a Package Explorer project, the java-src folder of that project is entered for you. Otherwise, use Browse to navigate to the java-src folder of the project folder of interest.
Package: Enter a Java package name that conforms to your site's standards. You can leave this field blank to use the default Java package, but Studio warns against this practice. If the current workspace folder already has one or more Java package directories in place, use Ctrl+Space to select among them from the content completion dialog.
Name: Enter a name for your Java test class, using standard Java class naming rules.
Application under test: If you entered the dialog by first selecting an EventFlow or StreamSQL file in the Package Explorer, that file name is filled in for you. If you first selected a deployment file, its name is filled in for you and the Container application field is filled in based on settings in the deployment file. Otherwise, use Browse to select the unit to test: an EventFlow or StreamSQL module, or a StreamBase deployment file. A selection dialog opens, which shows modules and deployment files in the module search path of the project named in the Source folder field. Select the Ignore module search path check box to see all modules in all project folders.
Adding the Libraries After Code Generation 137

138 Container application: This field only appears when you specify a deployment file in the Application under test field. This field shows a drop-down list of the modules specified in the deployment file. Select the container and module that you want to create the test for. 5. When done, press Finish. 6. Studio inspects the specified module or deployment file and generates a StreamBase JUnit Java test file. The new file is opened for editing in Studio. The generated Java test file is a starting point only, with incomplete sections. You must customize the file by adding the exact tuple content you want to send to the module's input streams, and the exact tuple content you expect to see on the module's output streams. See Editing StreamBase JUnit Tests for a discussion of the required and optional edits for your JUnit test file. Running StreamBase JUnit Tests in Studio Run a StreamBase JUnit test like any other Java code in Studio: with the StreamBase JUnit test file selected, click the Run button in the Studio toolbar. You can also open the Run Configurations dialog to create and edit a launch configuration for your test file. The StreamBase JUnit mechanism performs the following actions:
Starts StreamBase Server in the background.
Directs StreamBase Server to load and run the application module to be tested.
Sends one or more tuples to input streams as specified in the test file.
Monitors the specified output streams of the module for output tuples.
Compares the output tuples to the expected output as specified in the test.
Shuts down StreamBase Server.
Reports success or failure.
Success or failure is shown in the JUnit view, which shares the bottom pane of the SB Authoring perspective. A successful run shows a long green bar and statistics, like the following example: A failing run shows a stack trace that includes the tuples that failed to match the expected output: Generating JUnit Test Code 138
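To make the end-to-end flow concrete, the following is a minimal sketch of what a completed StreamBase JUnit test can look like for the Best Bids and Asks sample. This is not the wizard's generated code: the server setup and teardown calls and the tuple field names are assumptions, so adjust them to match your generated file and your module's actual schemas.

    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    import com.streambase.sb.unittest.Expecter;
    import com.streambase.sb.unittest.JSONSingleQuotesTupleMaker;
    import com.streambase.sb.unittest.SBServerManager;
    import com.streambase.sb.unittest.ServerManagerFactory;

    // Hedged sketch of a StreamBase JUnit test; tuple field names are hypothetical.
    public class BestBidsAsksTest {
        private static SBServerManager server;

        @BeforeClass
        public static void setupServer() throws Exception {
            server = ServerManagerFactory.getEmbeddedServer();  // assumed factory call
            server.startServer();
            server.loadApp("BestBidsAsks.sbapp");                // loads into container "default"
            server.startContainers();
        }

        @AfterClass
        public static void stopServer() throws Exception {
            if (server != null) {
                server.stopContainers();
                server.shutdownServer();
            }
        }

        @Test
        public void testBestAsk() throws Exception {
            // Send one input tuple; the field names in this JSON are assumptions.
            server.getEnqueuer("NYSE_Feed").enqueue(JSONSingleQuotesTupleMaker.MAKER,
                "{'time':33409,'symbol':'NLY','bid':14.0,'bidsize':100,'ask':16.0,'asksize':100}");

            // Verify the tuple emitted on the BestAsks output stream (expected fields are assumptions).
            new Expecter(server.getDequeuer("BestAsks")).expect(JSONSingleQuotesTupleMaker.MAKER,
                "{'symbol':'NLY','price':16.0}");
        }
    }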

139 Running StreamBase JUnit Tests at the Command Prompt Use the sbunit command to run a StreamBase JUnit test class at the command prompt. To use the sbunit command, you must have: A StreamBase JUnit test file, developed and tested in StreamBase Studio, and known to produce correct results. A server configuration file for the project, with the following minimum setting: A <dir> child element of the <java-vm> element, specifying the path to the package directory containing the test class. For test classes developed in StreamBase Studio, the package directory is usually the java-bin folder of the project folder in your Studio workspace. The minimum server configuration file for running sbunit is like this example:
<?xml version="1.0"?>
<streambase-configuration>
    <java-vm>
        <dir path="./java-bin" />
    </java-vm>
</streambase-configuration>
Run the test at the command prompt with a command like the following: sbunit -f sbd.sbconf com.example.sbjunit.testname See sbunit for more on the sbunit command. Testing StreamBase Deployment Files with JUnit Tests If you specified a StreamBase deployment file in the Application under test field in the test-generating wizard, Running StreamBase JUnit Tests in Studio 139

140 the generated JUnit test file includes a loadDeploy() line (instead of loadApp()), like the following example: server.loadDeploy("deployfilename.sbdeploy"); The specified deployment file must have an <application> element with module and container attributes. The deployment file can also specify container connections, module parameters, and extension point external modules, as required. The following deployment file example shows the minimum configuration that can be run with StreamBase JUnit:
<?xml version="1.0" encoding="UTF-8"?>
<deploy xmlns:xsi=" xsi:noNamespaceSchemaLocation="
    <runtime>
        <application container="default" module="BestBidsAsks.sbapp"/>
    </runtime>
</deploy>
Back to Top ^ Test/Debug Guide Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us Testing StreamBase Deployment Files with JUnit Tests 140

141 Site Map Index HomeInstallationStartAuthoringStreamSQLTest/DebugAPI GuideAdminAdaptersSamplesStudio GuideReferences Current Location: Home > Test/Debug Guide > StreamBase JUnit Tests > Editing StreamBase JUnit Tests Editing StreamBase JUnit Tests Contents Overview Required Edits Using Non-Default Container Names in Tests Using Non-Default Container Names with Deployment Files Further Options Overview Java test files generated by the StreamBase JUnit wizard are incomplete and must be edited to become a useful test. This topic provides guidelines for completing the edit of your generated StreamBase JUnit test files. Required Edits A generated JUnit test file includes: One example tuple using generated test data for one input stream of the module to be tested. A placeholder for an Expecter object that defines a tuple you expect to see on the module's output ports. You must make the following edits: 1. Fill in the generated example tuple for the first input stream with actual data. Run your module in Studio to send in one or more tuples, and record the output expected from those test tuples. Then fill in the exact same test tuple data to replace the generated data in the test file. 2. The wizard generated an example input tuple only for the first input stream of your module. First means the first input stream found in a top-down scan of the XML source of an EventFlow or StreamSQL module. If your module has more than one input stream, and those streams are relevant to the test you want to run, then you must add one or more input tuples for those streams as well. 3. One Expecter section is generated, showing a getDequeuer() method running on the generic output stream named OutputStream1. Replace OutputStream1 with the name of an actual Editing StreamBase JUnit Tests 141

142 output stream in the module to be tested. 4. Fill in the Object array marked "[REPLACE THIS]" with a comma-separated list of the tuple data you expect to see on this output stream, given the input tuple. 5. If the module to be tested has more than one output stream, add an Expecter section for each output stream. Tip When using the JSONTupleMaker class to format tuples for enqueuing and dequeuing, use the Copy as JSON feature in the context menu of the Application Output and Application Input views to quickly generate tuples in the correct format. Similarly, use the same context menu's Copy as CSV feature with the CSVTupleMaker class. Using Non-Default Container Names in Tests The generated JUnit test file includes a loadapp() line like the following example. This line is responsible for loading the specified application module into the test StreamBase Server. server.loadapp("appname.sbapp"); The line as generated loads the named module into a container named default. In this case, the StreamBase paths to the names of streams in your test code can be simple names without a container name, because StreamBase presumes a container named default if you don't specify a container. Thus, for a test of the Best Bids and Asks sample included with StreamBase, when using the default container, the following test code fragments are valid: server.getenqueuer("nyse_feed").enqueue( new Expecter(server.getDequeuer("BestAsks")) You may have an application-specific reason to load your test module into a container other than default. To do this in your test code, add a container name as a second argument to the loadapp method, like the following example: server.loadapp("appname.sbapp", "testcontainer"); In this case, you must make sure that all StreamBase paths in the test file include the container name. For example: server.getenqueuer("testcontainer.nyse_feed").enqueue( new Expecter(server.getDequeuer("testcontainer.BestAsks")) Using Non-Default Container Names with Deployment Files The same rule applies if you specify a non-default container name in a deployment file: you must make sure all StreamBase paths in the test file include the container name. For example, you might generate a test file to run the following simple deployment file, and you select the module in thedeploycontainer container in the Container application field of the New Unit Test Class Required Edits 142

143 dialog: <?xml version="1.0" encoding="utf-8"?> <deploy xmlns:xsi=" xsi:nonamespaceschemalocation=" <runtime> <application container="default" module="bestbidsasks.sbapp"/> <application container="deploycontainer" module="bestbidsalt.sbapp"/> </runtime> </deploy> In this case, the JUnit wizard automatically places the container name in the generated getenqueuer() line: server.getenqueuer("deploycontainer.nyse_feed").enqueue(... It is up to you to include the container name when you add further getenqueuer() and getdequeuer() lines. For example:... new Expecter(server.getDequeuer("deploycontainer.BestAsks"))... new Expecter(server.getDequeuer("deploycontainer.BestBids"))... Further Options You are not limited to the code generated by the wizard. For example, you might prefer to enqueue your input tuples constructed with ObjectArrayTupleMaker.MAKER instead of JSONSingleQuotesTupleMaker.MAKER. If you need your test to reset the state of the module under test in preparation for further tests, your test code can include a passage like the following: stopcontainers(); startcontainers(); Use the Javadoc documentation for the com.streambase.sb.unittest package as a guide to the test features available. See Java API Documentation. Back to Top ^ Copyright  StreamBase Systems, Inc. All Rights Reserved. Contact Us Using Non-Default Container Names with Deployment Files 143
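Putting these fragments together, a self-contained test that loads a module into a non-default container might look roughly like the following sketch. It assumes the com.streambase.sb.unittest API shown above and the Best Bids and Asks stream names; the class name, the enqueued tuple, and the expected output values are illustrative placeholders rather than recorded results.

import com.streambase.sb.unittest.Expecter;
import com.streambase.sb.unittest.JSONSingleQuotesTupleMaker;
import com.streambase.sb.unittest.ObjectArrayTupleMaker;
import com.streambase.sb.unittest.SBServerManager;
import com.streambase.sb.unittest.ServerManagerFactory;
import org.junit.Test;

public class ContainerNameSketch {

    @Test
    public void bestAsksInTestContainer() throws Exception {
        SBServerManager server = ServerManagerFactory.getEmbeddedServer();
        server.startServer();
        // Load the module into a container named "testcontainer" instead of "default".
        server.loadApp("BestBidsAsks.sbapp", "testcontainer");
        server.startContainers();

        // All stream paths must now be container-qualified.
        // The tuple values below are placeholders; use data you have recorded.
        server.getEnqueuer("testcontainer.NYSE_Feed").enqueue(
                JSONSingleQuotesTupleMaker.MAKER,
                "{'symbol':'IBM','bid_price':125.00,'bid_size':3000," +
                "'ask_price':139.45,'ask_size':2000,'sequence':467,'time_int':0}");

        // Expected output values are placeholders as well.
        new Expecter(server.getDequeuer("testcontainer.BestAsks"))
                .expect(ObjectArrayTupleMaker.MAKER,
                        new Object[] { 139.45, "IBM", 2000 });

        // Reset module state if further test cases follow, then shut down.
        server.stopContainers();
        server.startContainers();
        server.shutdownServer();
    }
}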

StreamBase JUnit Test Tutorial

Contents: Steps to Generate and Run a JUnit Test, Debugging StreamBase JUnit Tests

Steps to Generate and Run a JUnit Test

This section steps through the process of generating and running a simple StreamBase JUnit test for the Best Bids and Asks sample delivered with StreamBase.

1. In StreamBase Studio, load the Best Bids and Asks sample: From the top menu, select File > Load StreamBase Sample. Select bestbidsandasks from the Applications category. Click OK. Studio creates a project folder named sample_bestbidsandasks. If you loaded the sample previously, Studio creates sample_bestbidsandasks1.

2. If you make edits, make sure the application is free of typecheck errors.

3. Click the Run button. This opens the SB Test/Debug perspective and runs the application.

4. Use the Manual Input view to send the following input tuple:

time_int: 
symbol: IBM
bid_price: 125.00
bid_size: 3000
ask_price: 139.45
ask_size: 2000
sequence: 467

5. Keep a record of the exact input tuple you sent, and of the results you see on output streams in the Application Output view.

Tip: To save typing time later, perform the following steps:

a. In the Application Output view, right-click and disable Show time column.
b. Select the BestAsks line in the view, right-click, and select Copy as CSV. Paste this line into a text editor and save it.
c. Switch to the Application Input view. Select the single input line, right-click and select Copy as JSON. Paste the results in the text editor session.

You will use these CSV and JSON-encoded versions of the input and output later in your StreamBase JUnit code.

6. Press F9 or click the Stop Running Application button. This returns you to the SB Authoring perspective.

7. Select the BestBidsAsks.sbapp EventFlow module in the Package Explorer view. Right-click, and select New > StreamBase Unit Test (JUnit) from the context menu.

8. Fill in the fields of the New StreamBase Unit Test Class dialog:

Source folder: The sample_bestbidsandasks/java-src folder is entered for you.
Package: Enter com.example.sbjunit, or use a different Java package name that conforms to your site's standards.
Name: Enter BBA_test1. This serves as the name for your Java test class and the Java source file to be created.
Application under test: The BestBidsAsks.sbapp file name is entered for you. (If not, use Browse to select it.)

9. If the New StreamBase Unit Test Class dialog shows a warning that the Test Support library is not on the project's Java build path, then select the Click here link.

10. This opens the Properties dialog for the current project, with the Libraries tab of the Java Build Path panel already open. Hold the Ctrl key and select both JUnit4 and StreamBase Test Support. Click OK. This adds the two libraries to the project's build path, then returns you to the New StreamBase Unit Test Class dialog. (Alternative: if you select only JUnit4, StreamBase automatically includes the required StreamBase Client and Test Support libraries.)

11. Click Finish. Studio generates the JUnit test file named BBA_test1.java, and opens the file in Studio.

12. Inspect the BBA_test1.java test file. Notice that: The wizard inspected the specified EventFlow file and found one input stream. It generated one example tuple for this input stream, using generated values that match the data type of each field in the input stream's schema. The wizard did not generate an Expecter object for either of the module's output streams. Instead, it generated a generic Expecter for an output stream with the default name OutputStream1, and a "REPLACE THIS" reminder that you must name your output streams and provide the expected output tuple for each.

13. In the BBA_test1.java test file, replace the generated values in the inputTupleAsJSONString string with the values you noted in step 5. The following example shows the string broken into three pieces for publication clarity. It is usually simpler to type one long JSON string, with fields separated by commas, and field names single-quoted:

String inputTupleAsJSONString = "{'bid_price':125.00,'time_int': ," +
    "'ask_price':139.45,'symbol':'IBM','bid_size':3000," +
    "'sequence':467,'ask_size':2000}";

Tip: If you used the Copy as JSON feature in step 5, while the copied input string is still in your text editor, convert all double quotes to single quotes. Then copy the JSON string from the Application Input view to your Java code, carefully copying the entire brace-delimited line and placing it between double quotes in the JUnit code.

14. In the BBA_test1.java test file, locate the line that begins with Expecter. Replace OutputStream1 in this line with the name of the application's first output stream, BestAsks.

15. Replace the REPLACE THIS string for the Object[] with a comma-separated list of the fields you recorded in step 5:

Expecter expecter = new Expecter(server.getDequeuer("BestAsks"));
expecter.expect(ObjectArrayTupleMaker.MAKER, new Object[] { , "IBM", });

Tip: If you used the Copy as CSV feature in step 5, in the copied line, put double quotes around IBM. Now copy that line from your text editor to the JUnit code. (A consolidated sketch of the finished test file appears at the end of this topic.)

16. Save the BBA_test1.java test file, and click the Run button in the Studio toolbar.

17. The first time the test is run, Studio prompts you to specify whether this is a standard JUnit test for Java code, or a StreamBase JUnit test for a StreamBase module. Select StreamBase Unit Test and click OK.

18. Studio runs the test file. The test starts StreamBase Server, loads the BestBidsAsks.sbapp module, sends the specified tuple to input stream NYSE_Feed, and compares the output emitted from output stream BestAsks to the tuple you specified for the Expecter object. The JUnit view is brought to the foreground in the bottom pane. Look for the long green bar that indicates that the test passed.

19. Continue editing the BBA_test1.java test file. Add another Expecter for the application's second output stream:

... new Object[] { , "IBM", });
Expecter expecter2 = new Expecter(server.getDequeuer("BestBids"));
expecter2.expect(ObjectArrayTupleMaker.MAKER, new Object[] { , "IBM", });

20. Save and re-run the test file. Studio reports another success.

21. Edit the test file again, this time introducing a deliberate error. Change one of the expected values in the second Expecter to a value that does not match the recorded output:

new Object[] { , "IBM", });

22. Save and re-run the test. This time, Studio reports an error and shows the expected and received values.

23. Click the Compare Actual button to home in on the exact failure.

24. Edit the Java test file again and correct the error.

25. Now, run the same JUnit test from the command prompt with these steps:

a. Create a server configuration file for the project with File > New > Other > StreamBase > Server Configuration File. Do not select the Populate with... option. Name the file sbd.sbconf.

b. Edit the file to have the following bare minimum configuration:

<?xml version="1.0" encoding="utf-8"?>
<streambase-configuration xmlns:xi=" xmlns:xsi=" xsi:noNamespaceSchemaLocation="
  <java-vm>
    <dir path="./java-bin"/>
  </java-vm>
</streambase-configuration>

c. Open a UNIX terminal window or a StreamBase command prompt, and navigate to the project folder sample_bestbidsandasks in your Studio workspace.

Tip: On Windows, select the project folder in the Package Explorer view. Right-click, then select StreamBase > Open StreamBase Command Prompt Here from the context menu.

d. Run the test with the following command:

sbunit -f sbd.sbconf com.example.sbjunit.BBA_test1

e. Look for output like the following that shows success:

StreamBase unit test runner invoking JUnit...
JUnit version 4.7.
Time: 
OK (1 test)

26. Continue to experiment with different settings in the Java test file. Use the Javadoc documentation for the com.streambase.sb.unittest package as a guide to the test features available. See Java API Documentation.

Debugging StreamBase JUnit Tests

Starting with release 7.2.2, you can set breakpoints on both JUnit test code and EventFlow arcs for a module under test. Debugging the JUnit test then automatically switches between Java debugging and EventFlow debugging of the module under test. To illustrate this feature, set two Java breakpoints in the BBA_test1.java file created in the previous section: Set a breakpoint on the line that assigns a value to the inputTupleAsJSONString string, around line 50. Set another breakpoint on the first Expecter expecter line. In the BestBidsAsks.sbapp EventFlow file, set an EventFlow breakpoint on the arc exiting the input stream NYSE_Feed. Now, follow these steps:

1. Switch back to the BBA_test1 Java file and run it using the Debug button instead of the Run button.

2. This opens the Eclipse Debug perspective. Execution pauses at the first Java breakpoint.

3. In the Debug view, click the Continue button or press F8. This opens the EventFlow Editor canvas and pauses execution at the first EventFlow breakpoint.

4. Press the Continue button again, or optionally step through the EventFlow module. In either case, execution returns to the Java file and pauses on the second Java breakpoint.

Not all features of Java debugging are implemented when debugging StreamBase JUnit test code. For example, those familiar with Java debugging in Eclipse might expect the following sequence to work: When paused at the first Java breakpoint, click the Step Into (F5) button, which stops on the enqueue() line. Click Step Return (F7) to return to the first breakpoint. Click Step Into again to go back into the enqueue() implementation. Instead, pressing Step Return takes you to the EventFlow file.
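For convenience, the following sketch pulls the tutorial's edits together into one possible shape for the finished BBA_test1.java. It is a hand-written approximation, not the wizard's exact output: the server setup and teardown methods are assumptions based on the standard com.streambase.sb.unittest API, and the values marked as placeholders (time_int and the expected output fields) must be replaced with the data you recorded in step 5.

package com.example.sbjunit;

import com.streambase.sb.unittest.Expecter;
import com.streambase.sb.unittest.JSONSingleQuotesTupleMaker;
import com.streambase.sb.unittest.ObjectArrayTupleMaker;
import com.streambase.sb.unittest.SBServerManager;
import com.streambase.sb.unittest.ServerManagerFactory;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class BBA_test1 {

    private static SBServerManager server;

    @BeforeClass
    public static void setupServer() throws Exception {
        // Start an embedded test server and load the module under test.
        server = ServerManagerFactory.getEmbeddedServer();
        server.startServer();
        server.loadApp("BestBidsAsks.sbapp");
        server.startContainers();
    }

    @AfterClass
    public static void stopServer() throws Exception {
        if (server != null) {
            server.stopContainers();
            server.shutdownServer();
            server = null;
        }
    }

    @Test
    public void test1() throws Exception {
        // Step 13: the recorded input tuple, single-quoted JSON.
        // time_int is a placeholder value; use the value you actually sent.
        String inputTupleAsJSONString =
                "{'bid_price':125.00,'time_int':0," +
                "'ask_price':139.45,'symbol':'IBM','bid_size':3000," +
                "'sequence':467,'ask_size':2000}";
        server.getEnqueuer("NYSE_Feed").enqueue(
                JSONSingleQuotesTupleMaker.MAKER, inputTupleAsJSONString);

        // Steps 14-15: expected output on BestAsks.
        // The numeric values are placeholders; use your recorded CSV output.
        Expecter expecter = new Expecter(server.getDequeuer("BestAsks"));
        expecter.expect(ObjectArrayTupleMaker.MAKER,
                new Object[] { 139.45, "IBM", 2000 });

        // Step 19: expected output on BestBids (placeholders again).
        Expecter expecter2 = new Expecter(server.getDequeuer("BestBids"));
        expecter2.expect(ObjectArrayTupleMaker.MAKER,
                new Object[] { 125.00, "IBM", 3000 });
    }
}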

StreamBase Tests (sbtest)

StreamBase Tests provide a way to verify that an application module produces the exact expected output for specific input. This testing methodology is designed to ensure that the module under test continues to behave correctly after you have made changes to the module. The StreamBase Test mechanism is comparable to a macro recording and playback feature, and does not require writing or editing Java code. By contrast, StreamBase Studio also provides the StreamBase Unit Test mechanism, which is a true JUnit subsystem and API that provides an alternate method of module unit testing using Java-based test files. See StreamBase JUnit Tests for more on StreamBase JUnit Test features.

You develop and run StreamBase Tests to confirm that StreamBase applications return the expected result tuples given known input streams. You can organize a group of tests to be executed together as a StreamBase Test suite.

Create StreamBase Tests in one of two ways:

By recording a test configuration while an application is running and saving the results, as described in Creating and Running StreamBase Tests. The recording includes feed simulations representing input data, validation CSV files representing output data, and a test configuration file to tie them together as a unified test.
By creating and editing a test configuration file with an .sbtest extension, along with the input and verification files the test needs.

A StreamBase Test suite contains a set of StreamBase Tests to be run as a group. Create a StreamBase Test suite by moving a group of related test configuration files, plus all the resources those tests require, to the same folder, as described in Creating and Running Test Suites.

Once a StreamBase Test or a Test suite is prepared, you can run it:

Interactively in Studio, in the SB Test/Debug perspective.
From the shell command prompt with the sbtest command.
As part of an automated application test script.

Contents: Creating and Running StreamBase Tests, Using the StreamBase Test Editor, Creating and Running Test Suites

Creating and Running StreamBase Tests

Contents: Overview of Steps, Creating a StreamBase Test by Recording, Creating a Unit Test Manually, Running a Test in Studio, Running a Test from the Command Line, Editing Data Validation CSV Files, StreamBase Test Support Library, StreamBase Test Limitations, Related Topics

Overview of Steps

Follow these steps to configure and run a StreamBase Test for your StreamBase application modules:

1. Make sure the application module to be tested is free of typecheck errors.

2. Optional. Create feed simulation files for your application's input streams and prepare any CSV input data files needed by the feed simulations.

3. Run the application and confirm that it operates as expected with your feed simulation files, or with other input data.

4. Create a StreamBase Test configuration by running the application and recording its interaction with input data. The input data can come from manual input, from your development feed simulation files, or from playback in the Recordings view of the SB Test/Debug perspective. When you record a test configuration, StreamBase creates the following: A StreamBase Test feed simulation file for each input port, which records what went into that port during the recording. (These are not the same as your application's data feed simulation files.) A data validation CSV file for each output port, which records what came out of that port during the recording. A StreamBase Test configuration file with an .sbtest extension to tie all the pieces together into a single test.

5. Modify the Test configuration file and the recorded data validation CSV files as required: Specify which CSV data values can be excluded from the test. That is, specify values that can differ from the original recording without causing a test failure. Specify any CSV data values or other criteria that might differ from the recorded behavior, but must be true for a test to pass.

6. Run the StreamBase Test in Studio and observe the pass or fail results.

7. If the test fails, modify the test configuration and data validation CSV files. Keep running the test until you get consistent pass results.

8. Optional. When the StreamBase Test produces consistent pass results in Studio, run the test again with the sbtest command line utility.

9. Optional. Add the command line test scenario to your build and test automation procedures.

Once a StreamBase Test is developed and verified, you can use it later to confirm correct behavior of your application or module, or to confirm there have been no regressions in desired behavior. A developed test can be run in Studio, from the command line interactively, or from test scripts that call the command line test. You can also run it as part of a group of tests by creating a StreamBase Test Folder.

Note: To get started with StreamBase Tests, you can create a test configuration by recording your application receiving input from the Manual Input view in Studio instead of from feed simulations. However, to create a production-ready test with real-world input stream speed, use feed simulations.

Advanced users can create unit test configurations manually by creating data validation CSV files and creating a StreamBase Test file in the StreamBase Test Editor. StreamBase Systems recommends starting with a unit test recording and editing the generated test files.

If you have two or more tests for an application and you want to run the tests together, create a StreamBase Test Folder, as described in Creating and Running Test Suites.

Creating a StreamBase Test by Recording

Use the following procedure to create a StreamBase Test configuration by recording the interaction of a running application with feed simulations:

1. Run the application as described in Running Applications in Studio.

2. While the application is running, invoke File > New > Other > StreamBase Test (.sbtest). This brings up the New StreamBase Test dialog.

The Record a test check box is selected by default. (This check box is dimmed if you run File > New > Other > StreamBase Test (.sbtest) without running an application first.) The Do not record input streams that send no data check box is also selected by default. This reduces the number of input test data files generated to cover only the input streams actually in use as you record the test.

3. Select a folder in your workspace, and enter a base name for the test file. Click Finish.

4. Clicking Finish starts the recording and opens the Recording StreamBase Test dialog. This dialog stays open while the test is recording input and output data.

5. Supply data to the application using the Manual Input view, the Feed Simulations view, the Recordings view, or from an external source. The Recording dialog continuously reports the number of tuples recorded for use by the test.

6. To stop recording, click Finish in the Recording dialog.

7. The Test Editor opens, showing the current settings for the test just recorded.

8. Save the test configuration file and run it, or modify the test settings as described in Using the StreamBase Test Editor.

The test recording saved one data validation CSV file in the project folder for each output port in the application under test. These files are named with a combination of the test name and the port name, with a .csv extension. You may need to edit these CSV files to include or exclude certain fields from causing a test failure, as described in Editing Data Validation CSV Files.

Creating a Unit Test Manually

Advanced users can create StreamBase Test configurations without a running application. The following procedure creates a new, empty test configuration file in which you must specify all components of the test manually.

1. With no application running, invoke File > New > Other > StreamBase Test (.sbtest). This brings up the New StreamBase Test dialog. (The Record a test check box is dimmed in these circumstances.)

2. Select a folder in your workspace, and enter a base name for the test file. You can optionally use Advanced to link to an existing test configuration file instead of creating a new one. Click Finish.

3. A dialog pops up offering to add the StreamBase Test Support Library to this project's Java Build Path. Click Yes, then OK to perform this action. See Test Support Library for more information.

4. The Test Editor opens showing default and empty settings for the new test.

5. Edit the test configuration, specifying the feed simulation files and data validation CSV files to use, as described in Using the StreamBase Test Editor.

6. Save the test configuration file.

7. Copy or create one or more data validation CSV files for the test to validate against.

You can also create a StreamBase Test configuration by copying an existing test configuration file. Use copy and paste in the Package Explorer to select and copy an existing, known working test configuration file with an .sbtest extension. Then double-click the copied file in Package Explorer, which opens the Test Editor. Edit the copied test file as described in Using the StreamBase Test Editor.

Running a Test in Studio

Running a test requires a test launch configuration. Studio provides a default test launch configuration that lets you test right away without stopping to configure a launch configuration. You can also create and edit customized test launch configurations the same way you would configure run and debug launch configurations. See Editing Launch Configurations for details. How you run a test depends on the test configuration's Target Application setting:

If the test configuration's Target Application section uses either a Specified Application File or Specified StreamBase URI to locate the application to test, you can run the test immediately, using the default test launch configuration.
If the test configuration's Target Application section specifies Default, you must start the application to test before running the test, or you must use an application-specific test launch configuration.

Running a Test with Default Launch Configuration

To run a StreamBase Test using Studio's default launch configuration, follow these steps:

1. Open the EventFlow or StreamSQL file for the application under test.

2. Open the test configuration file that tests this application. This places both editor sessions side by side.

3. Run the application by making the application's editor session active, then clicking the Run button. This opens the SB Test/Debug perspective and starts the application.

4. Now make the Test Editor session active and click the Run button again. The Run button always runs the file in the currently selected editor session. Thus, this time, the test is run, using the default test launch configuration.

5. The test runs. Remember that a running test takes about as much time as it took to record the test, so give the test time to finish.

6. When the test finishes, look for results in the JUnit view. A long green bar in this view denotes test success. A red bar is accompanied by information in the Failure Trace pane to help you track down the cause of the test failure.

Running a Test with a Saved Launch Configuration

To run a test with a test-specific launch configuration, follow these steps:

1. Save a test configuration that specifies a Specified Application File or Specified StreamBase URI to locate the application to test.

2. Open Run > Run Configurations.

3. Select StreamBase Test and click the New Configuration button (or use the Duplicate button to copy an existing configuration).

4. Studio automatically selects the configuration temporarily named New_configuration.

5. Give the test launch configuration a name.

6. Use Browse to locate the test configuration file you saved in step 1.

7. Studio reads the name of the Target Application from the test configuration file.

8. Click Apply, then Run.

Once a launch configuration has run, Studio places its name in the Run > Run History menu. To run this configuration again, select it from this menu. When you run a saved launch configuration, Studio starts the application named in the StreamBase Test configuration, then runs the test. You do not need to start the application first.

Interpreting the Results of Studio Tests

Test progress and results are shown in the JUnit view. The test begins automatically and takes about as long to complete as the test recording took. On completion, look for a green bar to indicate success or a red bar to indicate test failure. If the test failed, the Failure Trace pane shows information to help you track down the cause of the failure.

Studio inherits the JUnit view from the Eclipse Java Development Tools feature. Many of the controls in this view apply to Java unit testing, but not to StreamBase testing. For instructions on using controls in the JUnit view, search for "JUnit" in the Java Development User Guide in the Studio help system.

Running a Test from the Command Line

StreamBase provides the utility sbtest to run StreamBase tests at the command line. Use this utility in a terminal window on UNIX or in a StreamBase Command Prompt on Windows. Use steps like the following:

1. Save a StreamBase Test configuration that specifies a Specified Application File or Specified StreamBase URI to locate the application to test.

2. In a UNIX terminal or StreamBase Command Prompt, navigate to the workspace directory that contains the project with your test file.

3. Run the test with a command like the following:

sbtest test-name.sbtest

4. The test reports a single period to the console to indicate it is running.

5. On completion, the test returns 0 for success and non-zero for failure. This allows runs of sbtest to be included in test scripts appropriate for the operating system. On completion, sbtest reports the time taken to execute the test in seconds and milliseconds, and either OK for successful completion or There was 1 error followed by the stack trace and a summary count of the total number of tests, failures, and errors.

Editing Data Validation CSV Files

The data validation CSV files used by the StreamBase Test framework for output streams have field names on the first row, which must match field names in the schema of the target stream. (Validation CSV files for input streams do not need to have a header row.) Subsequent column values must be convertible from type string to the target StreamBase data type. The only exception is for CSV files representing output stream values where the field value matches the Ignore Field Character defined in the test configuration. If a value in a field matches the Ignore Field Character, then any value is accepted for that field on the output stream and does not cause the test to fail. If a specific value is defined, then that specific value must appear in the output stream tuple when it is expected.

If a test is created by recording a running application, timestamp fields in the recorded output are usually not correct for subsequent runs of the test. Instead, replace the timestamp value in the output CSV file with the Ignore Field Character. This prevents tests from failing because of mismatched timestamps.

StreamBase Test Support Library

To run a StreamBase Test in Studio, the StreamBase Test Support Library must be in the Java Build Path of the project that contains the test. This library does not have to be in the path when recording a test, or when editing a test configuration. This library is not required when running a test with the command-line utility, sbtest, because sbtest already includes the library.

When you click Finish in the New StreamBase Test dialog, another dialog offers to add the Test Support library for you under the following circumstances:

No application is running.
An application is running, but you did not check the Record a test now check box. In this case, Studio prompts for permission to shut down the currently running application in order to add the library.

If you create a new StreamBase Test configuration while an application is running, and you check the Record a test now check box, the New StreamBase Test dialog does not offer to add the Test Support Library for you. (Doing so would require stopping the application you are about to record.) In this case, you must add the Test Support Library manually after recording the test.

Follow these steps to add the StreamBase Test Support Library to a project:

1. Select the project in the Package Explorer.

2. Right-click and select Build Path > Add Libraries from the context menu.

3. Select StreamBase Test Support and click Next.

4. On the next panel, click Finish.

5. Click OK.

Adding the StreamBase Test Support Library automatically adds the StreamBase Client Library as well.

StreamBase Test Limitations

Be aware of the following limitations of StreamBase Tests.

Test resources must reside in the same directory as the test configuration file. Test resources include: feed simulation files for each input stream, any CSV input data files used by those feed simulations, and the data validation CSV files created by the test recording (or created manually).

The StreamBase Test mechanism runs in a separate Java VM from the application under test. For applications with large memory requirements, this can lead to memory resource conflicts. While you can adjust the runtime parameters of the VM running your application, there are no adjustments available for the VM running the test.

StreamBase tests the top-level application or module it is given to test. If that application refers to another StreamBase application as a module, the operation of the referenced module is treated normally, as a simple component at the top level with inputs and outputs. If your application is composed of several modules, create separate unit tests for each module and test each module independently.

If the application under test contains an output adapter, the output streams of that adapter cannot be tested. Output adapters send output to file formats or data sources not under the control of the StreamBase Test mechanism. If your application is constructed in a way that allows you to exclude an output stream that goes to an adapter (exclude without breaking the application), then you can create a test that excludes the output adapter. If the output adapter is more closely integrated with your application, you cannot create a StreamBase test for that application.

Be cautious when recording a test against an application with inputs derived from input adapters. The general recommendation is that you exclude from a test any input adapters whose output is not always the same with each application execution. Do this by removing from the test's expected output data definition any streams that are downstream from the adapter.

Some applications to be tested have components with predicate settings that change the stream in non-repeatable ways. For example, a component might call the random() function as part of its processing. The tests for such applications are likely to fail with each run. You may be able to exclude the column that contains such fields by using the Ignore Field Character. Alternatively, you may be able to exclude from the test an input or output stream so as to avoid the path with the non-repeating predicate setting.

Related Topics

Using the StreamBase Test Editor
Creating and Running Test Suites

Using the StreamBase Test Editor

Contents: Introduction, Opening the Test Editor, Target Application Section, Input Feed Simulations Section, Output Streams to Verify Section, Advanced Properties Section, Related Topics

Introduction

The Test Editor is StreamBase Studio's interface for creating unit test files. A unit test file is an XML representation of the various components that allow a StreamBase test to run the same way repeatedly. Unit test files are saved in the current project folder with the .sbtest extension.

Opening the Test Editor

The Test Editor opens automatically under the following circumstances:

You begin recording a test scenario with an application running, as described in Creating and Running StreamBase Tests. When the recording completes, the Test Editor opens so that you can complete the configuration of the test scenario just recorded.
You create a new, empty test file without an application running by invoking File > New > StreamBase Test. After specifying the target folder and test file name, the Test Editor opens.
You double-click the name of an existing test file in the Package Explorer.

The Test Editor window has four major sections: Target Application Section, Input Feed Simulations Section, Output Streams to Verify Section, and Advanced Properties Section.

Target Application Section

Use this section to identify the application to test. Applications specified here are presumed to be already running when the test starts, or specified in a test launch configuration that starts the application and test at the same time. The options are:

Default. Run this test against the application running at the time the test is run. Use this option to set up a test that can be run against several applications, or against more than one version of an application.
Specified Application File. Run this test against the specific application specified in this field.
Specified StreamBase URI. When this test runs, connect to the StreamBase Server instance at this URI, and run the test against the application running on that server.

Input Feed Simulations Section

Use this section to identify one or more feed simulation configuration files that will feed data to the test application's input streams while the test is running. This section has three buttons:

Add. Opens the Add Input Feed Simulations dialog, which allows you to select from the feed simulation files in the current project.
Edit. Opens the standard Feed Simulation editor to edit the selected feed simulation file.
Remove. Removes the selected feed simulation file from this test configuration.

Output Streams to Verify Section

Use this section to identify one or more output streams in the application specified above, as well as the expected schemas of those streams, and their test criteria. The left side of this section has three buttons:

Add. Opens the Add Output Stream to Verify dialog, which allows you to select from the streams in the application under test.
Edit. Opens the standard Schema editor to edit the selected output stream.
Remove. Removes the selected output stream from this test configuration.

The selected output stream is verified using the criteria in the Selected Output Stream Options area on the right side of this section. Select an output stream on the left to see the options on the right for that stream. The options are:

Data Validation File. The name of the CSV file containing the expected output of the selected stream. Validation CSV files created during a test recording are named from the test name plus the stream name. Validation CSV files that you create manually can have any name.
Ignore Field Character. A single wildcard character found in a CSV file value that matches any output from the stream. The default character is *.
Dequeue timeout (seconds). The acceptable duration for a successful dequeue operation. The default is 20 seconds.
Use Strict Schema Validation. The test framework compares output stream schemas in the target application to expected schemas defined in the test. The comparison can find more or fewer fields and data types in the target application, or an exact match. If this option is checked, the comparison test looks for an exact match between application and unit test schemas. If cleared, application schemas can have more or fewer fields than defined in the test without causing the test to fail.

Validate Tuple Ordering. If cleared, tuples can be output in any order; as long as there is a match available in the validation CSV file, the test does not fail.
Map to sub-fields of tuple fields. Use this check box to specify that the fields of a flat CSV file are to be mapped to the sub-fields of tuple fields, not to the tuple fields themselves. This feature lets Studio read flat CSV files generated manually or generated by third-party applications such as Microsoft Excel, and lets Studio apply such files to schemas that have tuple fields. See Map to Sub-Fields Option for more on this subject.

Advanced Properties Section

Use this section to specify pre-test setup scripts and post-test cleanup scripts. Scripts are any script or program executable at the shell prompt of the operating system on which the test is run. The fields are:

Pre Test script. The script or program to be run before this test begins.
Post Test script. The script or program to be run after this test finishes.
Script timeout (milliseconds). Specifies how long to wait for either the pre- or post-test script to finish before continuing with the next step in the test.

Related Topics

Creating and Running StreamBase Tests

Creating and Running Test Suites

Contents: Creating Test Suites, Configuring and Running a Test Suite in StreamBase Studio, Running a Test Suite from the Command Line, Notes About StreamBase Test Suites, Related Topics

Creating Test Suites

StreamBase supports batch execution of tests by means of StreamBase test suites. StreamBase test suites are folders that contain tests and resources for the tests. The folders are just like any other folder in the file system except for the way they can be interpreted by StreamBase. Applications associated with the tests in a test suite can be in the test suite folder, in the parent folder of the test suite folder, or specified on the sbtest command line. All resources required for a test must be in the same test suite folder as the test that requires them. You can configure and run test suites in StreamBase Studio, or by means of the sbtest command.

Configuring and Running a Test Suite in StreamBase Studio

When you have existing tests that you want to run as a group, you can create a folder anywhere in the file system, copy the tests into the folder, and then import the folder into the StreamBase project containing the application to be tested. You can also create a new folder in a StreamBase project and import tests and resources to make it a test suite. To specify batch execution of tests, specify their folder when you create the test launch configuration in the Run dialog.

In this illustration the test suite is /sample_firstapp/test_folder1.

When you run the test suite, you can select an option to use a new StreamBase Server instance for each test. You can improve performance by running all the tests in a single server instance, but that option can cause issues resulting from side effects of the tests. Also, if you select this option in a case where tests use different applications, each application runs in its own server instance and you will not see a performance improvement by selecting this option.

When you run a test suite containing one or more tests that specify Default as the target application, StreamBase Studio prompts you to choose an application. If a test explicitly specifies an application, StreamBase Studio looks for the application in the test folder and in its parent folder. Then, if the application is not found, Studio prompts you to choose among several (if more than one with the same name is found). StreamBase automatically starts servers as required to run the applications being tested.

Example of Using Test Suites in StreamBase Studio

This example shows a test suite for firstapp, which is the StreamBase sample project used in the Getting Started Tutorial. The following directory structure shows sample_firstapp, which contains firstapp.sbapp and test_folder1, a test suite that has been added to the original project. This test suite holds three test files and their associated resources for firstapp.sbapp.

The tests in this test suite have been created with the target application set to Default. See test1.sbtest for an example:

<?xml version="1.0" encoding="utf-8"?>
<sbtest:suite xmi:version="2.0" xmlns:xmi=" xmlns:sbtest="
  <Tests UseDefaultTarget="true">
    <Outputs URI="test1-AllTheRest.csv" DequeueTimeoutSeconds="20">
      <Stream StreamName="AllTheRest">
        <Fields Name="symbol" DataTypeName="string"/>
        <Fields Name="quantity" DataTypeName="int"/>
      </Stream>
    </Outputs>
    <Outputs URI="test1-BigTrades.csv" DequeueTimeoutSeconds="20">
      <Stream StreamName="BigTrades">
        <Fields Name="symbol" DataTypeName="string"/>
        <Fields Name="quantity" DataTypeName="int"/>
      </Stream>
    </Outputs>
    <Inputs URI="test1-feedsim.sbfs"/>
  </Tests>
</sbtest:suite>

To run the tests in test_folder1:

1. Select the test suite (test_folder1) and open the Run dialog.

2. Select StreamBase Test as the kind of configuration you want to specify.

3. Type a name for the configuration and select the option to run all the tests in test_folder1, as shown in the previous illustration.

4. Click Run.

5. When the dialog prompts you to select the application to run, click firstapp.sbapp.

6. Watch the Console and the JUnit view to see test execution progress and final results of the tests. The JUnit view provides a quick view of success or failure. The Console shows progress while each test is running, and shows additional information for tests that fail.

Running a Test Suite from the Command Line

Using the sbtest command to run a test suite is similar to running just one test from the command line. To run the test suite shown in the preceding test_folder1 example, issue the following command from the parent folder for the test suite:

sbtest test_folder1
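Because sbtest returns 0 for success and non-zero for failure, whether it runs a single test or a whole suite, you can wrap it in your own automation. The following is only an illustrative sketch and not part of the StreamBase product: it assumes sbtest is on the PATH and that the test_folder1 suite is in the current working directory, and the wrapper class name is made up for this example.

import java.io.IOException;

public class RunStreamBaseSuite {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Launch sbtest against the suite folder and stream its output
        // to this process's console.
        ProcessBuilder pb = new ProcessBuilder("sbtest", "test_folder1");
        pb.inheritIO();
        Process p = pb.start();

        // sbtest signals pass or fail through its exit code:
        // 0 means all tests passed, non-zero means at least one failure or error.
        int exitCode = p.waitFor();
        if (exitCode != 0) {
            System.err.println("StreamBase test suite failed with exit code " + exitCode);
            System.exit(exitCode);
        }
        System.out.println("StreamBase test suite passed.");
    }
}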


More information

Introduction. Key features and lab exercises to familiarize new users to the Visual environment

Introduction. Key features and lab exercises to familiarize new users to the Visual environment Introduction Key features and lab exercises to familiarize new users to the Visual environment January 1999 CONTENTS KEY FEATURES... 3 Statement Completion Options 3 Auto List Members 3 Auto Type Info

More information

CST8152 Compilers Creating a C Language Console Project with Microsoft Visual Studio.Net 2010

CST8152 Compilers Creating a C Language Console Project with Microsoft Visual Studio.Net 2010 CST8152 Compilers Creating a C Language Console Project with Microsoft Visual Studio.Net 2010 The process of creating a project with Microsoft Visual Studio 2010.Net is similar to the process in Visual

More information

HPE Security Fortify Plugins for Eclipse

HPE Security Fortify Plugins for Eclipse HPE Security Fortify Plugins for Eclipse Software Version: 17.20 Installation and Usage Guide Document Release Date: November 2017 Software Release Date: November 2017 Legal Notices Warranty The only warranties

More information

Avalanche Remote Control User Guide. Version 4.1

Avalanche Remote Control User Guide. Version 4.1 Avalanche Remote Control User Guide Version 4.1 ii Copyright 2012 by Wavelink Corporation. All rights reserved. Wavelink Corporation 10808 South River Front Parkway, Suite 200 South Jordan, Utah 84095

More information

Windows 8.1 User Guide for ANU Staff

Windows 8.1 User Guide for ANU Staff Windows 8.1 User Guide for ANU Staff This guide has been created to assist with basic tasks and navigating Windows 8.1. Further tips for using Windows 8.1 can be found on the IT Services website, or by

More information

Infor LN Studio Application Development Guide

Infor LN Studio Application Development Guide Infor LN Studio Application Development Guide Copyright 2016 Infor Important Notices The material contained in this publication (including any supplementary information) constitutes and contains confidential

More information

HP QuickTest Professional

HP QuickTest Professional HP QuickTest Professional Software Version: 10.00 Installation Guide Manufacturing Part Number: T6513-90038 Document Release Date: January 2009 Software Release Date: January 2009 Legal Notices Warranty

More information

BASIC USER TRAINING PROGRAM Module 5: Test Case Development

BASIC USER TRAINING PROGRAM Module 5: Test Case Development BASIC USER TRAINING PROGRAM Module 5: Test Case Development Objective Student will have an understanding of how to create, edit and execute a Test Case from Develop a Test Case Activity Page. Student will

More information

Using SQL Developer. Oracle University and Egabi Solutions use only

Using SQL Developer. Oracle University and Egabi Solutions use only Using SQL Developer Objectives After completing this appendix, you should be able to do the following: List the key features of Oracle SQL Developer Identify menu items of Oracle SQL Developer Create a

More information

VMware Mirage Web Manager Guide

VMware Mirage Web Manager Guide Mirage 5.3 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document,

More information

Introduction to IBM Data Studio, Part 1: Get started with IBM Data Studio, Version and Eclipse

Introduction to IBM Data Studio, Part 1: Get started with IBM Data Studio, Version and Eclipse Introduction to IBM Data Studio, Part 1: Get started with IBM Data Studio, Version 1.1.0 and Eclipse Install, work with data perspectives, create connections, and create a project Skill Level: Intermediate

More information

Introduction to IBM Data Studio, Part 1: Get started with IBM Data Studio, Version and Eclipse

Introduction to IBM Data Studio, Part 1: Get started with IBM Data Studio, Version and Eclipse Introduction to IBM Data Studio, Part 1: Get started with IBM Data Studio, Version 1.1.0 and Eclipse Install, work with data perspectives, create connections, and create a project Skill Level: Intermediate

More information

CST8152 Compilers Creating a C Language Console Project with Microsoft Visual Studio.Net 2005

CST8152 Compilers Creating a C Language Console Project with Microsoft Visual Studio.Net 2005 CST8152 Compilers Creating a C Language Console Project with Microsoft Visual Studio.Net 2005 The process of creating a project with Microsoft Visual Studio 2005.Net is similar to the process in Visual

More information

A QUICK OVERVIEW OF THE OMNeT++ IDE

A QUICK OVERVIEW OF THE OMNeT++ IDE Introduction A QUICK OVERVIEW OF THE OMNeT++ IDE The OMNeT++ Integrated Development Environment is based on the Eclipse platform, and extends it with new editors, views, wizards, and additional functionality.

More information

Contents Upgrading BFInventory iii

Contents Upgrading BFInventory iii Upgrading ii Upgrading Contents Upgrading.............. 1 Upgrading to IBM Tivoli Endpoint Manager for Software Use Analysis version 2.0....... 1 Planning and preparing for the upgrade.... 2 Installing

More information

Partner Integration Portal (PIP) Installation Guide

Partner Integration Portal (PIP) Installation Guide Partner Integration Portal (PIP) Installation Guide Last Update: 12/3/13 Digital Gateway, Inc. All rights reserved Page 1 TABLE OF CONTENTS INSTALLING PARTNER INTEGRATION PORTAL (PIP)... 3 DOWNLOADING

More information

DS-5 ARM. Using Eclipse. Version Copyright ARM. All rights reserved. ARM DUI 0480L (ID100912)

DS-5 ARM. Using Eclipse. Version Copyright ARM. All rights reserved. ARM DUI 0480L (ID100912) ARM DS-5 Version 5.12 Using Eclipse Copyright 2010-2012 ARM. All rights reserved. ARM DUI 0480L () ARM DS-5 Using Eclipse Copyright 2010-2012 ARM. All rights reserved. Release Information The following

More information

Quick Start Guide TABLE OF CONTENTS COMMCELL ARCHITECTURE OVERVIEW COMMCELL SOFTWARE DEPLOYMENT INSTALL THE COMMSERVE SOFTWARE

Quick Start Guide TABLE OF CONTENTS COMMCELL ARCHITECTURE OVERVIEW COMMCELL SOFTWARE DEPLOYMENT INSTALL THE COMMSERVE SOFTWARE Page 1 of 35 Quick Start Guide TABLE OF CONTENTS This Quick Start Guide is designed to help you install and use a CommCell configuration to which you can later add other components. COMMCELL ARCHITECTURE

More information

CST8152 Compilers Creating a C Language Console Project with Microsoft Visual Studio.Net 2003

CST8152 Compilers Creating a C Language Console Project with Microsoft Visual Studio.Net 2003 CST8152 Compilers Creating a C Language Console Project with Microsoft Visual Studio.Net 2003 The process of creating a project with Microsoft Visual Studio 2003.Net is to some extend similar to the process

More information

Installation and Release Bulletin Sybase SDK DB-Library Kerberos Authentication Option 15.5

Installation and Release Bulletin Sybase SDK DB-Library Kerberos Authentication Option 15.5 Installation and Release Bulletin Sybase SDK DB-Library Kerberos Authentication Option 15.5 Document ID: DC00534-01-1550-01 Last revised: December 16, 2009 Topic Page 1. Accessing current bulletins 2 2.

More information

CS520 Setting Up the Programming Environment for Windows Suresh Kalathur. For Windows users, download the Java8 SDK as shown below.

CS520 Setting Up the Programming Environment for Windows Suresh Kalathur. For Windows users, download the Java8 SDK as shown below. CS520 Setting Up the Programming Environment for Windows Suresh Kalathur 1. Java8 SDK Java8 SDK (Windows Users) For Windows users, download the Java8 SDK as shown below. The Java Development Kit (JDK)

More information

Your password is: firstpw

Your password is: firstpw SHARE Session #9777: WebSphere and Rational Developer Hands-on-Labs Building Java application on System z with RDz Lab exercise (estimate duration) Part 1: Your first Java application on z/os (~35 min).

More information

Artix Orchestration Installation Guide. Version 4.2, March 2007

Artix Orchestration Installation Guide. Version 4.2, March 2007 Artix Orchestration Installation Guide Version 4.2, March 2007 IONA Technologies PLC and/or its subsidiaries may have patents, patent applications, trademarks, copyrights, or other intellectual property

More information

Installation and Upgrade Guide Zend Studio 9.x

Installation and Upgrade Guide Zend Studio 9.x Installation and Upgrade Guide Zend Studio 9.x By Zend Technologies, Inc. www.zend.com Disclaimer The information in this document is subject to change without notice and does not represent a commitment

More information

TIBCO BusinessConnect EBICS Protocol Installation and Configuration. Software Release 1.0 December 2011

TIBCO BusinessConnect EBICS Protocol Installation and Configuration. Software Release 1.0 December 2011 TIBCO BusinessConnect EBICS Protocol Installation and Configuration Software Release 1.0 December 2011 Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF SUCH EMBEDDED

More information

Using the JSON Iterator

Using the JSON Iterator Using the JSON Iterator This topic describes how to process a JSON document, which contains multiple records. A JSON document will be split into sub-documents using the JSON Iterator, and then each sub-document

More information

Getting Started (1.8.7) 9/2/2009

Getting Started (1.8.7) 9/2/2009 2 Getting Started For the examples in this section, Microsoft Windows and Java will be used. However, much of the information applies to other operating systems and supported languages for which you have

More information

Module 4: Working with MPI

Module 4: Working with MPI Module 4: Working with MPI Objective Learn how to develop, build and launch a parallel (MPI) program on a remote parallel machine Contents Remote project setup Building with Makefiles MPI assistance features

More information

In this lab, you will build and execute a simple message flow. A message flow is like a program but is developed using a visual paradigm.

In this lab, you will build and execute a simple message flow. A message flow is like a program but is developed using a visual paradigm. Lab 1 Getting Started 1.1 Building and Executing a Simple Message Flow In this lab, you will build and execute a simple message flow. A message flow is like a program but is developed using a visual paradigm.

More information

TIBCO BusinessConnect ConfigStore Management Interface Protocol Installation. Software Release 1.0 February 2010

TIBCO BusinessConnect ConfigStore Management Interface Protocol Installation. Software Release 1.0 February 2010 TIBCO BusinessConnect ConfigStore Management Interface Protocol Installation Software Release 1.0 February 2010 Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF

More information

ZENworks 2017 Update 2 Endpoint Security Utilities Reference. February 2018

ZENworks 2017 Update 2 Endpoint Security Utilities Reference. February 2018 ZENworks 2017 Update 2 Endpoint Security Utilities Reference February 2018 Legal Notice For information about legal notices, trademarks, disclaimers, warranties, export and other use restrictions, U.S.

More information

Maintain an ILE RPG application using Remote System Explorer

Maintain an ILE RPG application using Remote System Explorer Maintain an ILE RPG application using Remote System Explorer ii Maintain an ILE RPG application using Remote System Explorer Contents Maintain an ILE RPG application using Remote System Explorer.......

More information

Using the Prime Performance Manager Web Interface

Using the Prime Performance Manager Web Interface 3 CHAPTER Using the Prime Performance Manager Web Interface The following topics provide information about using the Cisco Prime Performance Manager web interface: Accessing the Prime Performance Manager

More information

Getting Started with Xpediter/Eclipse

Getting Started with Xpediter/Eclipse Getting Started with Xpediter/Eclipse This guide provides instructions for how to use Xpediter/Eclipse to debug mainframe applications within an Eclipsebased workbench (for example, Topaz Workbench, Eclipse,

More information

FUSION REGISTRY COMMUNITY EDITION SETUP GUIDE VERSION 9. Setup Guide. This guide explains how to install and configure the Fusion Registry.

FUSION REGISTRY COMMUNITY EDITION SETUP GUIDE VERSION 9. Setup Guide. This guide explains how to install and configure the Fusion Registry. FUSION REGISTRY COMMUNITY EDITION VERSION 9 Setup Guide This guide explains how to install and configure the Fusion Registry. FUSION REGISTRY COMMUNITY EDITION SETUP GUIDE Fusion Registry: 9.2.x Document

More information

ActiveSpaces Transactions. Quick Start Guide. Software Release Published May 25, 2015

ActiveSpaces Transactions. Quick Start Guide. Software Release Published May 25, 2015 ActiveSpaces Transactions Quick Start Guide Software Release 2.5.0 Published May 25, 2015 Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF SUCH EMBEDDED OR BUNDLED

More information

bs^ir^qfkd=obcib`qflk= prfqb=clo=u

bs^ir^qfkd=obcib`qflk= prfqb=clo=u bs^ir^qfkd=obcib`qflk= prfqb=clo=u cçê=u=táåççïë=póëíéãë cçê=lééåsjp=eçëíë cçê=f_j=eçëíë 14.1 bî~äì~íáåö=oéñäéåíáçå=u This guide provides a quick overview of features in Reflection X. This evaluation guide

More information

Report Commander 2 User Guide

Report Commander 2 User Guide Report Commander 2 User Guide Report Commander 2.5 Generated 6/26/2017 Copyright 2017 Arcana Development, LLC Note: This document is generated based on the online help. Some content may not display fully

More information

StarTeam File Compare/Merge StarTeam File Compare/Merge Help

StarTeam File Compare/Merge StarTeam File Compare/Merge Help StarTeam File Compare/Merge 12.0 StarTeam File Compare/Merge Help Micro Focus 575 Anton Blvd., Suite 510 Costa Mesa, CA 92626 Copyright 2011 Micro Focus IP Development Limited. All Rights Reserved. Portions

More information

Supplement H.1: JBuilder X Tutorial. For Introduction to Java Programming, 5E By Y. Daniel Liang

Supplement H.1: JBuilder X Tutorial. For Introduction to Java Programming, 5E By Y. Daniel Liang Supplement H.1: JBuilder X Tutorial For Introduction to Java Programming, 5E By Y. Daniel Liang This supplement covers the following topics: Getting Started with JBuilder Creating a Project Creating, Compiling,

More information

Work Smart: Microsoft Office 2010 User Interface

Work Smart: Microsoft Office 2010 User Interface About the Office 2010 User Interface You can use this guide to learn how to use the new features of the Microsoft Office Ribbon. Topics in this guide include: What s New in the Office 2010 User Interface

More information

Supplement II.B(1): JBuilder X Tutorial. For Introduction to Java Programming By Y. Daniel Liang

Supplement II.B(1): JBuilder X Tutorial. For Introduction to Java Programming By Y. Daniel Liang Supplement II.B(1): JBuilder X Tutorial For Introduction to Java Programming By Y. Daniel Liang This supplement covers the following topics: Getting Started with JBuilder Creating a Project Creating, Compiling,

More information

Outlook Quick Start Guide

Outlook Quick Start Guide Getting Started Outlook 2013 Quick Start Guide File Tab: Click to access actions like Print, Save As, etc. Also to set Outlook Options. Quick Access Toolbar: Add your mostused tool buttons to this customizable

More information

Overview. Borland VisiBroker 7.0

Overview. Borland VisiBroker 7.0 Overview Borland VisiBroker 7.0 Borland Software Corporation 20450 Stevens Creek Blvd., Suite 800 Cupertino, CA 95014 USA www.borland.com Refer to the file deploy.html for a complete list of files that

More information

Apache Directory Studio LDAP Browser. User's Guide

Apache Directory Studio LDAP Browser. User's Guide Apache Directory Studio LDAP Browser User's Guide Apache Directory Studio LDAP Browser: User's Guide Version 2.0.0.v20180908-M14 Copyright 2006-2018 Apache Software Foundation Licensed to the Apache Software

More information

CSCI 161: Introduction to Programming I Lab 1a: Programming Environment: Linux and Eclipse

CSCI 161: Introduction to Programming I Lab 1a: Programming Environment: Linux and Eclipse CSCI 161: Introduction to Programming I Lab 1a: Programming Environment: Linux and Eclipse Goals - to become acquainted with the Linux/Gnome environment Overview For this lab, you will login to a workstation

More information

ZENworks Reporting System Reference. January 2017

ZENworks Reporting System Reference. January 2017 ZENworks Reporting System Reference January 2017 Legal Notices For information about legal notices, trademarks, disclaimers, warranties, export and other use restrictions, U.S. Government rights, patent

More information

Migration to Unified CVP 9.0(1)

Migration to Unified CVP 9.0(1) The Unified CVP 9.0(1) requires Windows 2008 R2 server. The Unified CVP versions prior to 9.0(1) run on Windows 2003 server which do not support the upgrade to Unified CVP 9.0(1). Unified CVP supports

More information

Halcyon Spooled File Manager GUI. v8.0 User Guide

Halcyon Spooled File Manager GUI. v8.0 User Guide Halcyon Spooled File Manager GUI v8.0 User Guide Copyright Copyright HelpSystems, LLC. All rights reserved. www.helpsystems.com US: +1 952-933-0609 Outside the U.S.: +44 (0) 870 120 3148 IBM, AS/400, OS/400,

More information

TIBCO ActiveMatrix BusinessWorks Installation

TIBCO ActiveMatrix BusinessWorks Installation TIBCO ActiveMatrix BusinessWorks Installation Software Release 6.2 November 2014 Two-Second Advantage 2 Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF SUCH EMBEDDED

More information

Working with Mailbox Manager

Working with Mailbox Manager Working with Mailbox Manager A user guide for Mailbox Manager supporting the Message Storage Server component of the Avaya S3400 Message Server Mailbox Manager Version 5.0 February 2003 Copyright 2003

More information

How to install the software of ZNS8022

How to install the software of ZNS8022 How to install the software of ZNS8022 1. Please connect ZNS8022 to your PC after finished assembly. 2. Insert Installation CD to your CD-ROM drive and initiate the auto-run program. The wizard will run

More information

Installation and Upgrade Guide Zend Studio 9.x

Installation and Upgrade Guide Zend Studio 9.x Installation and Upgrade Guide Zend Studio 9.x By Zend Technologies, Inc. www.zend.com Disclaimer The information in this document is subject to change without notice and does not represent a commitment

More information

System Administration

System Administration Most of SocialMiner system administration is performed using the panel. This section describes the parts of the panel as well as other administrative procedures including backup and restore, managing certificates,

More information

TIBCO ActiveMatrix BusinessWorks Plug-in for WebSphere MQ Installation

TIBCO ActiveMatrix BusinessWorks Plug-in for WebSphere MQ Installation TIBCO ActiveMatrix BusinessWorks Plug-in for WebSphere MQ Installation Software Release 7.6 November 2015 Two-Second Advantage Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE.

More information

Getting Started with Web Services

Getting Started with Web Services Getting Started with Web Services Getting Started with Web Services A web service is a set of functions packaged into a single entity that is available to other systems on a network. The network can be

More information

Integrating IBM Security Privileged Identity Manager with ObserveIT Enterprise Session Recording

Integrating IBM Security Privileged Identity Manager with ObserveIT Enterprise Session Recording Integrating IBM Security Privileged Identity Manager with ObserveIT Enterprise Session Recording Contents 1 About This Document... 2 2 Overview... 2 3 Before You Begin... 2 4 Deploying ObserveIT with IBM

More information