CoreReader Advanced Operations Manual
This is a legally copyrighted work. ( It has not been substantially updated in years. A report of erroneous or misleading information will be appreciated. ) __________________________________________________ Beginners: please stop here and go to the basic documentation. None of these features are needed for point-and-click queries, and basic knowledge is required to understand them.
__________________________________________________ Chapter Copyright __________________________________________________ CoreReader Technology
__________________________________________________ Chapter Document Description __________________________________________________ The CoreReader documentation is in three books.
Book 1: The point-and-click character of CoreReader allows a beginner to benefit from the system without mastering any documentation. The section on connecting to data sources is probably necessary, but the other sections can wait until the beginner wants them.
Book 2:
Book 3:
__________________________________________________ Chapter System Configuration __________________________________________________
_________________________ CoreReader may be configured to your needs by changing values in the configuration screen. On the main screen, press the "configuration" button to open the configuration screen. Change the parameters to values that you want and press the save button. System configurations are maintained by CoreReader in tables in disk files. Those tables may be manually edited, but manual editing is not recommended. If a table is corrupted, it should usually be deleted to allow CoreReader to create a new table with default values.
_________________________
System Configuration The font settings are applied to the data browser, the text data screen, the SQL frame, the documentation, and the query lists.
System Configuration Changing this setting will immediately alter the appearance and operation of CoreReader to correspond to a new level of operator competency.
When somebody reports that CoreReader is having trouble running a query, that may indicate that he is attempting to operate above his skill level. That is not necessarily a bad thing because CoreReader is a great teacher, but the operator must be prepared for problems. The operator who operates above his true level of competency may be confused and may produce misleading information. Non-technical level:
Novice level:
Intermediate level:
Proficient level:
This level of operation is for those who work with enterprise level database servers and who write SQL statements freehand beyond those of CoreReader's constructs. To those who qualify for this level of competency, CoreReader will be useful mainly for its speed, convenience, and universal access.
System Configuration Turning this on will allow multiple data sets to be displayed. Before enabling multiple data browsers, see the discussion of dangers in the multiple data set section.
System Configuration CoreReader displays a prompt screen for the new user when loaded. If it has been disabled, it can be re-enabled by setting this parameter to "yes".
System Configuration By default, the connection assistant is displayed for the new user, but it is easily disabled. If it has been disabled, it can be re-enabled by setting this parameter to "yes".
System Configuration The system installs with examples of various kinds of connections in its database. When this parameter is set to no, the examples will no longer be displayed. (Examples can be created and deleted from the database on the examples panel.)
System Configuration If this option is set to "yes", the password will be saved with the other parameters when the connection is saved. The default is "yes" to reduce beginners' confusion. Because connections are intentionally saved as unencrypted text, you may want to change this setting.
System Configuration The network type is normally set to local. When operating across the internet, read the internet section for help in determining the correct setting. Internal operations are changed greatly by changing this setting. When changing from internet mode to local mode, if the system has an operational connection, the connection will be broken.
System Configuration The command line interface is covered in depth in its own section of the advanced topics. CoreReader's security disables this interface by default and it must be enabled to be used. While disabled, the system cannot be run by batch files or external systems.
_________________________ Gateway settings can be entirely ignored in most CoreReader installations. They are needed only when the Gateway will be used. The Gateway server is addressed in depth in the Server Documentation. To alter settings, press the admin button to display the administration screen, and then press the gateway button to change the gateway operations.
System Configuration This toggle enables and disables the gateway. CoreReader's security disables the Data Server during installation and it must be enabled before other systems can use it. Note that this controls gateways for the entire installation. All computers that use the database will respect this value. If it is on, then multiple gateways may be run. Do not start a job server on a machine on which a gateway server runs. This and other aspects of the Gateway server are addressed in depth in the Server Documentation.
System Configuration This toggle allows or disallows the acceptance of SQL statements by the gateway. If this is set to no, the gateway server will run only saved queries which must be specified by name. This and other aspects of the gateway server are addressed in depth in the Server Documentation.
System Configuration This is the maximum number of connections to data sources that can be open concurrently. The minimum allowed is 3 because at least one connection is devoted to CoreReader's database, and sometimes 2 are required. The maximum is 1000 simply to avoid a runaway condition. The default is 100. Very small computers may need a lower setting. Very large server installations with heavy use and with a great deal of physical resources may require a higher setting. This and other aspects of the data gateway are addressed in depth in the Server Documentation.
System Configuration This toggle allows or disallows the acceptance of commands to the gateway itself via SOAP. CoreReader's security turns this off during installation. This and other aspects of the data server are addressed in depth in the Server Documentation.
System Configuration This toggle allows or disallows the acceptance of SQL statements via SOAP by the gateway server. If this is set to no, the gateway will accept only the names of saved queries via SOAP. CoreReader's security turns this off during installation. This operates independently of the free-form SQL toggle. This and other aspects of the data server are addressed in depth in the Server Documentation.
System Configuration CoreReader contains a security service which can be told to reject all SQL update and DDL commands that are sent through the gateway server by setting this parameter to no. The installation default is yes. The GUI and the gateway server have separate update settings. Neither affects the other. Setting this parameter to yes will produce a slight increase in gateway speed because the gateway will stop checking SQL statements for updates.
System Configuration When this is turned on, the gateway will log its operations. There will be no record of operations if it is off. This setting is separate from the GUI log setting. The default is on.
System Configuration The gateway server shares this parameter with the entire installation. See the Log Failover sub-section under the GUI Logging section.
System Configuration This setting allows detailed logging. When it is off, only the most basic events are recorded. This setting is separate from the GUI log setting. The default is on.
System Configuration This setting allows the accumulation of operating statistics for the gateway. When turned on, each gateway will accumulate statistics until it is unloaded. At that time, it will add its statistics into the system statistics table.
System Configuration If not blank, a Gateway can run only on the specified computer. If blank, Gateways can run on all computers. The default for this setting is blank.
System Configuration If not blank, a Gateway can run only under the specified login ID. If blank, Gateways can be run by any login. The default for this setting is blank.
_________________________
System Configuration A report file name may be entered here. If left blank, the default name of the output file is query. Do not enter a suffix. CoreReader assigns a suffix that is appropriate to each type of output file. This file name will be used for all of the outputs that are written to disk. See the various sections for details.
System Configuration Output locations for text, web, and XML may be specified using a drive letter or a UNC path. It is not necessary to specify those three locations. If left blank, those three locations all default to the standard output location under the CoreReader exe.
System Configuration If the output is sent to a database, this is the connection string which will be used by the report module to connect to the output database. Refer to the section on outputting to a database for details.
System Configuration If the output is sent to a database, this is the name of the table into which the output will be inserted. Refer to the section on outputting to a database for details.
System Configuration If the output is sent to a database, this is the type, or brand name, of the database manager. If it is one of those that are listed in the connection screen, then this name must be a precise match for one of those so CoreReader will know how to handle it. Refer to the section on outputting to a database for details.
System Configuration If the output is sent to a database, this is the type of the connection. It must be ODBC or OLEDB. Refer to the section on outputting to a database for details.
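As an illustration only ( the server, database, and table names below are hypothetical, and the exact connection string depends on the driver in use ), the four output-database parameters might be filled in roughly like this for an ODBC connection to a SQL Server database:
Connection string: Driver={SQL Server};Server=dbsrv01;Database=reports;Trusted_Connection=yes;
Output table: query_output
Database type: SQL Server
Connection type: ODBC
Remember that the database type must precisely match one of the names listed in the connection screen.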
System Configuration Values entered here will become centered headers on web pages and on the text output. The headers will be enlarged and stressed when sent to HTML. Each header can be up to one thousand characters, but on text output, it will be truncated to the width of the report.
System Configuration Values entered here will become centered footers on web pages and on the text output. Each footer can be up to one thousand characters, but on text output, will be truncated to the width of the report.
System Configuration Linking into a web site is done through the return page parameter. Usually, this parameter is the name of the web page which will be loading the CoreReader web page. The CoreReader page will carry a link back to that page. See the output type section for a discussion of using CoreReader to create web output.
System Configuration Since XML and HTML do not require line breaks, CoreReader does not insert them by default. Line breaks can be toggled on and off for each. The recommended setting for SOAP service is off. Line break toggles were implemented in release 21008.
System Configuration If a header file is specified, CoreReader will include it in the report as a header. It will replace CoreReader's html, title, header, and body tags. If the specified file is not found, an operator runtime error will be raised.
System Configuration If a footer file is specified, CoreReader will include it in the report as a footer. It will assume responsibility for closing the html and body tags. If the specified file is not found, an operator runtime error will be raised.
_________________________
System Configuration Speed is increased and system wear is reduced by buffering CoreReader's log. That greatly enhances multi-system operations. There is a downside to that feature: if the computer crashes, the buffer contents are lost, so the final log entries will be missing.
System Configuration Identification numbers of various instances will be found in the log. When one of the systems or sub-systems is opened, he creates a unique identifier for himself. Thus, each running instance has an identifier which can be used to identify the instance. In the case of the database managers, they will tell you their ID when asked for it. The identifier is case sensitive. Each instance ID remains valid for as long as the instance runs and may be used to identify extant instances. When an instance terminates, the ID disappears. Pursuant to my personal standard of ethics and unlike the numbers that are generated by the big name brand, these instance ID's cannot be used to identify another system, hardware, platform, organization, or individual. These numbers are what they claim to be and are no more.
System Configuration Toggles logging on and off. Please read the logging section before enabling logging.
System Configuration Toggles verbose logging on and off.
System Configuration This sets the maximum size, expressed as a number of records, to which the log will be allowed to grow. When that size is reached, the log will be truncated. Normally, database truncation is done when a system is loaded.
System Configuration Toggles statistics logging on and off. Please read the logging section before enabling logging.
System Configuration The purpose of the local display of the log is to permit viewing by those who need to see it, such as job maintainers, but who are not system administrators. The system log is displayed from the Configuration screen simply because there is currently no better location. For those with access to the system database, it might be better to display the log in notepad because the database is designed for such access. The log can be displayed only if access has been granted. The display uses the AxleBase row parameter in the SQL statement to avoid overwhelming the local GUI; each time that the display button is pressed, the value increases. The recorded workstation is not displayed, but the operator name is displayed because it is needed for debugging jobs. At over ten thousand characters per record, the log records are too large to be displayed as stored. To avoid that problem, as AxleBase returns each record, CoreReader removes the trailing spaces from it. That usually allows several hundred records to be displayed, so the return parameter defaults to 300. If the records are very large, that value will need to be decreased to display them.
System Configuration When a new system is installed, it can service its log without configuration. Therefore, any system that is pointed to a database can service the log by default. The same is true of gateway servers. This default is designed to allow the neophyte to get up and running quickly. In a multi-user system where many people and servers can be using the system, this behaviour can be altered. If the "system server service only" configuration is changed to yes, then only a job server can truncate the log. It will truncate the log through the use of the CoreReader internal utility which is set up for that purpose. Normally, database truncation is done when a system is loaded and that should be adequate for an installation that uses only the GUI. However, an installation that uses a gateway may create large logs between loads which might be serviced by an unattended job server. Another reason for using the server to service the log is that if the log has reached the target size, the truncation removes all of the log records. The server can make a backup before running the truncation.
_________________________
System Configuration When loading, CoreReader normally immediately displays the connection screen so the operator can connect to a database. But if autoconnect is turned on, he bypasses the connection screen load and immediately attempts to make the specified connection. No message is displayed. This feature works in conjunction with the autoconnect name parameter. Use of this feature for internet work is extremely problematic. It should never be used on a new connection before testing. To avoid some operational problems which can be confusing for the operator, this cannot be turned on when the job server is enabled.
System Configuration If CoreReader finds that autoconnect is enabled when he loads, and if he then finds an autoconnect name, he bypasses the connection screen load and immediately attempts the specified connection. No message is displayed. Obviously, the name must match an extant connection name. Be certain that the name is spelled precisely like the name of the connection.
System Configuration If this feature is enabled, CoreReader will attempt to automatically load the database every time that any valid data connection is established. Using this feature over the internet is inadvisable. Use of this feature on a new data connection is extremely inadvisable. Also, if this feature is turned on, when errors occur, it may be difficult to determine whether they came from the data socket or the data source. This is a multi-valued setting as discussed in the configuration section's header. The multi-valued nature of this parameter can be confusing. Auto-load will run when the system is loaded only if the primary parameter is set to yes. The auto-load setting in the connection is used only when a connection is made after the system is loaded. To avoid some operational problems which can be confusing for the operator, this cannot be turned on when the job server is enabled.
System Configuration If this feature is enabled, CoreReader will attempt to automatically run the specified query each time that a database is loaded.
System Configuration This is the name of the query that is to be run by the autoquery feature. CoreReader will look for the name in the query table. If it is not found, an operator error will be raised.
System Configuration This is the output specification for the autoquery. This is required if the autoquery is enabled.
System Configuration This setting may be ignored for nearly all single-user operations. The default is five seconds. Based upon my experience with enterprise-level inter-system work, this should be more than adequate for most systems. This setting is not only useful, but sometimes vital when the job server is used. For example, another system will sometimes have a CoreReader delivery file open to read or operate on it when CoreReader is ready to deliver the next set of data. Normally, the entire exchange would take place in milli-seconds and the second system would return an error message because it could not use the file as needed. CoreReader has internal spinlocks which will continue to try the file for the number of seconds specified by the spinlock setting. If the other system releases the file within that number of seconds, CoreReader will continue to work. If the release does not happen within that time, an error message will be generated. ( This spinlock setting does not apply to the internal CoreReader database tables. That spinlock is set to 15 seconds upon installation and can be changed only by someone who is familiar with AxleBase operations.)
System Configuration This setting is critical to the Job Server operation and is discussed in the miscellaneous parameters of The Job Manager And Job Server chapter.
System Configuration This is a multi-part and multi-valued parameter. The two parts are the computer name and the server name. Each may have multiple values. The presence of a computer name is a toggle. If it is present when CoreReader is loaded on the specified computer, the departmental job server will automatically load and begin running. The computer parameter is multi-valued to allow multiple job server recoveries. The computer names must be separated by the vertical bar which is referred to as a pipe. The server name assignment is optional. If present, that name will be assigned to the job server when it starts. If a name is present, then the total number of name separators, the vertical bar, must equal the number of separators used in the computer name parameter. This insures that the server names correctly match the computer names. Be certain that other automatic load features are turned off on the designated computers. When the server automatically loads, the execution of the system startup is halted at that point until the server is stopped. This can sometimes cause unpredictable events in the GUI and is the price for the ability to autoload. ( See also the additional discussion in the Job Server chapter.)
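As an illustration only ( the computer and server names are hypothetical ), a configuration that provides for job server recovery across three machines might pair the two parts like this:
Computer name: WKSTN01|WKSTN02|WKSTN03
Server name: JobServerA|JobServerB|JobServerC
Each part uses the same number of pipe separators, so each server name is matched to the computer in the same position.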
System Configuration Provided for general use by the system where a specific timeout is not provided. The default is 1800 seconds.
_________________________ The example panel in the config screen allows deletion and creation of job examples and connection examples. When the system is installed, it creates a complete set of examples in its database. Pressing the delete button will delete all of them. If they are needed later, pressing the create button will create all of them. If the toggle to show connection examples is turned off, then the examples will not be displayed even if they are in the database. The system loads some data from its database when it is started, so it should be restarted after deleting or creating examples. Each time that the create button is pressed, it creates a complete set of examples, so it is possible to have multiple entries in the database. The database should be purged now and then to remove deleted records that are empty.
_________________________
System Configuration Ignore this section if you use CoreReader alone. CoreReader may be configured and deployed for multi-user operations in small work groups to provide management, supervision, and sharing. The connections, queries, and jobs are private to the person who saved them. However, they can be shared by the owner. Operators can retrieve and execute shared queries, but they can edit, save, and delete only their own queries. The job server can become an even more powerful tool in a shared environment. Each person can develop his own jobs and can then share them. When the job server is run as a departmental server, it will run all of the shared jobs.
System Configuration CoreReader installs in single-user mode. An installation becomes multi-user when the users are directed towards a shared database. As each new CoreReader is installed, it can be directed to the shared database. The process is quick and simple, but the possible errors require care. Do not copy a CoreReader database. The embedded database manager maintains pointers to various objects and an invalid pointer can erase or corrupt the database. The CoreReader Database chapter gives detailed instructions for moving a database and for directing people to it.
System Configuration The disturbing trend in management to revert to the old mainframe mentality must be addressed. Many managers have found that they are unable to control the creative environment of the personal computer and the personalities that it attracts. Therefore, some organizations are reverting to mainframe-type environments through systems such as Citrix servers. The next step, of course, will be old-fashioned dumb terminals with some kind of upbeat new name that are connected to something that will not be called a mainframe. If your organization has reverted to mainframe-type control, then your needs cannot be foreseen here, and you must ascertain for yourself what is needed to create your multi-user installation.
_________________________
System Configuration CoreReader is not secure against a malicious person employed in the organization. The permissions are designed to protect the system from carelessness and ignorance. All permissions are enabled by default so that neophytes do not receive additional stress as they begin learning about the world of data. It is up to the administrator to disable permissions for new people. That means that the installation of CoreReader on a new employee's computer with a pointer to the departmental database will give him unlimited access to the system. Remember that this is by design and is not a shortcoming. The administrator should prepare a set of permissions before opening the system to him. The administrator should probably leave all of his own permissions enabled so that he can access all of the system. If a permission is turned off, the system will usually fail silently. It will appear to the operator that the system is not working.
System Configuration To administer permissions, press the permissions button to display the permissions screen. When the permissions screen opens, the drop down list at the top of the screen is filled with the names of everybody who has settings on file. If a new list is needed while working, reload the screen. A name can be selected from the list or a name can be typed in. The name must be the person's unique login ID. After selecting or entering a name, press the load button. That will load all of the permission records for that person. Note that the system allows the load of permissions for a person who does not yet exist. Enter a new name and press the load button to display a complete set of permissions for the new person. This allows the administrator to prepare the system in advance for a new person. When entering a new name, insure that it is spelled correctly. Since the system has no way to validate it, this is a good way to enter a lot of junk records into your system. The purge button purges all of a person's permission and configuration records from the system. It may be used at any time. All that it needs is for the name of the person to be entered. The purge button is provided for the removal of obsolete records. If the intent is to lock a person out of the system, do not purge his records. If the records are purged, when that person opens the system, CoreReader will think that he is new and will give him a set of defaults with total access. To lock a person out, change the system access permission record from yes to no. To change a permission, load the permission records and click the record to highlight it. Then press the "change" button to change that record. That can be repeated for all records that need to be changed. Simply double clicking a permission is a faster way to change it. Then press the save button.
System Configuration The system permissions are a list of rights to access objects. For example, at the time of this writing, the objects are:
Each person has a complete set of permissions. If none are found on file for him, then the system assigns all of the defaults. Each record has five different permissions. They are:
On the permission screen, the state of each permission is displayed as a Y or an N, meaning yes or no. The gateway server and the job server can run under a login ID only if that ID has execute permission for the object. (Do not forget to also set the applicable configurations to allow execution under that ID.) Security can be so daunting that the following suggestions are offered to get a new administrator headed in the right direction.
__________________________________________________ Chapter The CoreReader Database __________________________________________________
_________________________ CoreReader uses the AxleBase embedded database manager to manage his database. He passes commands and SQL statements to the database manager, which inserts and retrieves data and manages the database. For additional information about the database manager, see the documentation on the http://www.AxleBase.com/ web site. The database manager is not intended to be a server. It is an embedded system within CoreReader. The CoreReader database contains all of the system configuration information and operational information such as saved connections and saved queries (stored procedures). In the CoreReader database will be found all of the tables that are used by CoreReader. Their format is obvious and editing them by hand is possible but not recommended. If a record structure or data is not what the database manager expects, the results will be unpredictable.
_________________________
The CoreReader Database The installation defaults for the CoreReader database and the database domain are:
The default database domain name is domain. It should be changed only by an experienced administrator because changing it can wipe out the database. Note that the name of the last directory in the path is also the name of the domain. That is a requirement. The CoreReader database name is CoreReader and cannot be changed.
The CoreReader Database The location should not be changed unless you are a CoreReader administrator or you are familiar with how the embedded database manager operates. A CoreReader database cannot be copied to a new location. The internal database manager maintains pointers to various locations, and if one of them is invalidated, the system will cease functioning. The change will preserve the old database in case there is a problem. If others are using the system, or if servers or gateways are located on other computers, then their pointer files must be immediately changed before they re-enter the system. If the pointers are encrypted, they must be decrypted before beginning the re-location. Then, unload the system and reload to insure decryption. This is necessary so that the process can be watched for errors. If either the database location is changed or the domain is changed, then the other must also be changed. They must be simultaneously changed to new locations. Each contains pointers to the other which can be updated only by an experienced database administrator, so if only one is moved, they both become invalid with unpredictable results. When both are changed, the net result is that new ones are created in the new locations. The pointer file and its use are very simple. Because of its importance and because it is so easy to make a mistake with it, the pointer file is covered in its own section of this chapter. Please read that section before moving the database. The new locations must be ones that do not exist. As a security precaution, the database manager will not use an existing location. It is recommended that the path to the domain and the path to the database be identical except for the final directory to facilitate administration. The final directories must be different because that is where the data resides. The default directory for the database is \corerdr\ and the default for the domain is \domain\. Although they can be changed, those should be used for the new locations. Do not locate the database under the domain, and do not locate the domain under the database. The database manager does housekeeping chores at his discretion and may delete extraneous objects from the domain and from the database location. There are several things that have nothing to do with moving the database, but which can make everybody extremely unhappy if neglected. Before beginning a move:
To move your database:
CoreReader moves existing data files after the new objects are created. If something goes wrong and there is no old data to preserve:
If something goes wrong and old data was corrupted:
If the new objects are created, but are not what was desired, just delete the pointer file. When CoreReader starts, he will create a new pointer file which points to the old database. An example: Assume that the defaults are now being used.
A word of caution. This seems like a difficult and involved procedure, but when it happens, it happens very fast. Therefore, after you get the hang of it, it would be very easy to suddenly have CoreReader databases scattered all over your network, which could become confusing.
The CoreReader Database (Reminder: Be sure that the permissions have been set for the new person before pointing his system to the new database. See the discussion in the Permissions section of the System Configuration chapter.) As a security precaution, the embedded database manager requires that a shared database have the same path from all locations. It cannot be k:\corereader\ on one workstation and m:\corereader\ on another. If it is \\server\corereader\ on one, then it cannot be k:\corereader\ on another. The database must be centrally located before it is shared. If a workstation is pointed to it and it does not exist at that location, the workstation will create it there. To share a database, the CoreReader on the workstation must be unloaded. Then change the pointer file so that it points to the shared database. (See the Pointer File section.) When CoreReader is loaded on that workstation, he will connect to the shared database. If the administrator has a master pointer file prepared, he can install CoreReader on the workstation and immediately place the pointer file in the CoreReader directory. When CoreReader starts, he will immediately connect to the shared database. He will not even create a local database on the workstation.
_________________________ The pointer file is a simple matter that can become problematic if not completely and adequately explained, so it will here be given more coverage than it warrants. Please read it. After you read it, you will notice that it is all very simple, but forgetting a detail can cause problems. Each system has a file in its startup location named pointer.txt. The sole function of that file is to tell the system where its database is located when it starts up. The system reads the location and immediately queries that location for its data. Interestingly, the pointer file can be ignored in almost all installations. It will become important only in multi-user installations or when the database must be relocated. Every time that CoreReader starts, he reads his pointer file. No exceptions. After reading, verifying, and storing the values from it, he again writes it back to disk. No exceptions. This simple fact can become important to an administrator. If the verification procedure finds that the location, the name, or the domain has been corrupted, he will change the values to the default before writing back to disk. For most CoreReader installations, that means that operation will continue as usual, but if he was pointed to a different database before the corruption, he will suddenly create a local database and begin using it. The database location can be manually changed with notepad or can be changed in the local system administration screen after system installation. If the change is made in the admin screen, the system will request an immediate shutdown. When he is reloaded, he will connect to the new database. To manually change to a different database, just change the two locations in the pointer file. When the system starts, he will immediately connect to the new database. The two locations are the location of the database and the location of its domain controller. Their default locations for most installations are:
If changes to the pointer file are invalid, or if the file is accidentally corrupted, delete it. When CoreReader starts, he will notice that the file is missing and he will re-write it with the default values. Since all systems must use an identical pointer file, a master pointer file may be made for new installations. After each system is installed, that pointer file can be copied into the exe location before starting the new system. On startup, the new system will immediately connect to the specified database. A multi-user installation must have identical pointer files on all workstations. If one uses j:\server4\corerdr\ for the database, then another cannot use m:\server4\corerdr\ . If one uses \\server4\c\corerdr\, then all must use that string instead of mapping to it. This is a security requirement of the embedded database manager. Having the pointers in a local text file means that everybody can find the database, so CoreReader allows encryption of the pointer files. The encryption toggle can be found in the admin screen. When it is turned on, the next time that a system is started on the network, it will encrypt its pointers. As noted elsewhere, CoreReader security is not intended to be a defense against a malicious and determined employee. This is a light encryption.
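As an illustration only ( the exact layout of pointer.txt is not documented here, and the server name is hypothetical ), a shared installation might point every workstation at the same two UNC locations, such as:
\\server4\c\corerdr\
\\server4\c\domain\
Every workstation would carry exactly that text rather than a mapped drive letter, as the embedded database manager requires.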
_________________________ As with all databases, normal operation introduces wasted space into tables. To clean up the database and to recover unused space, press the purge button on the administration screen. (For extended discussion of the purge, see the documentation on the AxleBase site.) Sluggishness of the system load is usually a sure sign that it is time to purge. A screen will hang an extra long time while records are loaded from the database. Before running a purge:
Other database maintenance is handled by the embedded database manager. Some database managers purge their database during the backup. AxleBase is designed to back up without a purge to give the operator control of the operation. If a table becomes so badly corrupted that the database manager cannot repair it, and there are no backups, the table's .dat file may be deleted. The database manager will re-initialize it and continue operations. All data, of course, is thereby lost. If the database becomes unusable and there are no backups, as a final resort close CoreReader and delete the database. CoreReader will reinitialize the database on startup. All data will be lost. (Regular backups are recommended.) A purge that is run by pressing the button performs a bit of house cleaning that the job server will not do. When the button is pushed, before it tells the database manager to clean the database, it first deletes records that have been rendered obsolete by continued development of CoreReader.
_________________________ You will notice that, when CoreReader is started the first time, he creates a directory under his location named \backup. He will not use that directory and it is provided as a convenience for your backups. Since a remote location would be more secure, that location is not recommended, but it is there if you want it.
The CoreReader Database When manually doing a backup copy, the database and the domain must be copied simultaneously. If only the database is copied, the database may be corrupted with resultant loss of data. Simply copying the db directory in most CoreReader installations will get everything. Unload all CoreReader systems that use the database, including the local system, before executing the backup.
The CoreReader Database Before executing a system backup, the system must be configured for backup in the configuration screen. If it is not correctly configured, then the backup will fail or may not deliver expected results. CoreReader maintains his own backup files. The parameters that are discussed below tell him how you want him to do that. Parameters are verified when saved and are not re-verified thereafter. Therefore, if the backups are to be done by the Job Server, they must be tested before being put into production and should be checked now and then. Errors will be logged. There are two types of system backups:
Before making a backup, the target path must be entered into the configuration panel. CoreReader will let you know whether or not he can see it when you save it. If making generational backups, the retention period must be selected in the configuration panel. That will set the maximum amount of time that he will maintain a backup on file. Expired backups are deleted. After the system is configured, the backup is executed by pressing the button on the administration screen. That's all that there is to it. Each time that you want to do a backup, display that screen and press the button. Unload all CoreReader systems that use the database before executing the backup. If a system other than the backup system is running during the backup, the backup can be corrupted. It is a good idea to sometimes validate the backup process by inspecting the backup files. Many times in my career, I have witnessed and heard of system administrators failing to restore from an expensive backup system because they failed to insure that the system was functioning properly. When CoreReader is told to backup, he forwards the command with parameters to AxleBase. This has saved development time and increased the solidity of your CoreReader. Any generated errors will return from AxleBase.
The CoreReader Database Before restoring a data object from the archives, the CoreReader system must be shut down and all systems that use the database must be shut down. Restoring during operation may result in corrupted data. To restore a data object, copy the object from the archive over the top of the corrupt object. That applies to all objects regardless of size. To restore only a particular table, look for the archive file that has the table name with a .dat suffix. If the entire database must be restored, then the domain database must also be simultaneously restored. Delete the old \corerdr and \domain directories and copy the backups to their location. If the entire installation is lost to hardware failure, then the last backup can be used to bring it back on line in new hardware. To do that, copy the last complete backup to the new location. The new location must be configured exactly like the lost location. For example, if the old database was located in c:\program files\db\, then the new location must be c:\program files\db\. For a complete discussion of the nature of the CoreReader database, refer to the AxleBase documentation.
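As an illustration only ( the table name, archive location, and database location are hypothetical; use the names and paths of your own installation ), restoring a single table from an archive amounts to a file copy such as:
copy "\\backupserver\corereader\queries.dat" "c:\program files\db\corerdr\queries.dat"
With all CoreReader systems shut down, the archived .dat file simply overwrites the corrupt one.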
_________________________ To reduce program development and to give CoreReader the advantages of true database management, the AxleBase database manager is embedded in the system. AxleBase is invisible to those who use CoreReader and normally needs no attention. It is possible to reconfigure AxleBase as it operates within CoreReader. Reconfiguration can cause data loss or even the loss of the entire database, so it is not recommended. Also, the CoreReader database is designed for the special needs of CoreReader and he configures his database accordingly when he is installed. Deviations from that design can degrade system performance or halt the system entirely. For system administrators who are responsible for such duties and who need to alter the system, the AxleBase documentation is available at http://www.AxleBase.com .
__________________________________________________ Chapter Moving Data __________________________________________________
_________________________ Ignore this section if you do not need to run across the internet. Apparently unknown to many people is the fact that the internet is merely a very low-tech primitive network. CoreReader uses the internet just as it uses any other network. It performs all operations across it with no problems. CoreReader can connect to any data source anywhere in the world that has an internet connection. Administrators, take note if you have an unsecured server!
Moving Data To operate in internet mode, open the configuration screen. Change the network type to internet and save the new setting. CoreReader will connect using an internet domain name in the server name field or a routable IP address in the server IP field, whichever is most appropriate. All other connection parameters will be used normally. When configured for internet mode, all connections are operated in that mode. This will place an additional load on the local infrastructure during local operations. When CoreReader is shut down, he automatically reconfigures back to local network operation.
Moving Data The salient features of the internet are its speed, noise level, degree of reliability, and security. When compared to the local networks to which most of us are accustomed, the internet is slow, noisy, unreliable, and insecure. The operation of any application across the internet is impacted by all of those factors. During normal database operations, a connection to the database is created and that connection is maintained for the duration of the work session. That data connection is a software connection which is made possible by the reliability of the infrastructure. An unreliable infrastructure may break the data connection, and a broken data connection can cause inordinately severe problems for the participating systems. That includes CoreReader and the database server, and sometimes for the intervening support systems. Broken data connections can even cause computers to crash. Because the internet is unreliable and filled with noise, we may assume frequent breaks in data connections across it. The problem is avoided through the use of the network setting in the configuration screen. It is normally set to local, which tells CoreReader to work normally with data connections. When it is set to internet, CoreReader's internal operations change dramatically. ( As a safety precaution, CoreReader resets it to local when he is shut down. ) The data connection and the database load will be done as usual. The difference happens after the first query. CoreReader has been told that he is connected to an unreliable network, so he destroys the connection immediately after the query completes. Queries run as usual, and the data set is safe and can be used as usual, but CoreReader has unplugged from the server after getting the data. When the query button is again pressed, CoreReader recognizes that he is in internet mode and has unplugged from the server, so he finds the last connection that was made, re-connects, runs the new query, retrieves the data, and again disconnects. Thus, CoreReader relieves the participating systems of much exposure to internet vagaries. It is not a good idea to configure the system for internet operation if it is running on a local network. Repeatedly establishing connections puts an additional and unnecessary load on the database server and on the infrastructure.
Moving Data Internet loading is a complex subject that goes far beyond this brief assessment. If CoreReader is to be deployed organizationally, the subject should receive management consideration before deployment. The low speed of the internet can be addressed in each data connection because each has independent timeout settings. Normally, the connection timeout and the query timeout are set to numbers below ten while running only on a local network. Be prepared to increase them dramatically for the internet to reduce timeouts while waiting for responses. The query timeout will be most sensitive to internet operation. It is difficult to advise specific timeout values due to the unreliable nature of the medium. The speed varies and is dependent upon factors such as geographical location, local demographics, time of day, community infrastructure, the political situation, etc. The only recourse is for each installation to experiment and to be prepared to change the values frequently. For internet operation, the operator should give additional thought to query construction to reduce superfluous data returns. Data sets that are routinely sucked across the local network can be monumentally large in internet terms. A return of a hundred thousand large records could take ages.
Moving Data When operating CoreReader on the internet, be prepared for frequent attack from script kiddies, highly knowledgeable hackers, and powerful anti-social organizations. The CoreReader activity generates the signature of a high level of sophistication which tends to attract attention. The client computer is actually at the least risk. The knowledgeable hackers are not usually interested in small fish, but go after the servers. The client computer is usually targeted by "script kiddies" who use programs that are written by the knowledgeable hackers. Therefore, the client should be expected to receive the greatest number of attacks, and the server should be expected to receive the most serious attacks. The database server should be running on a secure computer. Each database should be password protected. Additionally, and not in lieu of those measures, internet operations should be run through VPNs. CoreReader's computer and the database server should both be behind firewalls. Before connecting an unsecured database to the internet, consider the fact that CoreReader is available world-wide.
_________________________ DTS ( Data Transfer Services ) is techno-speak for moving data around. Managers adore techno-speak. Knowing techno-speak is sometimes sufficient to get a job and I have personally found that not knowing the techno-speak is worse than not being qualified for the job. CoreReader can be used for:
See the data Output section; Database sub-section. See the Saved Queries ( Stored Procedures ) section. See the Job Server section. See the Datasource Server documentation. See the Database Mover section.
_________________________
Moving Data The command line interface was built for those who need to query databases without a human operator at the console; usually system administrators, database administrators, managers, and system engineers.
Moving Data In addition to the graphical user interface ( GUI ), CoreReader can be operated from the command line by an external system or DOS batch file. When CoreReader detects that startup method, it loads its command line interface ( CLI ). When the CLI is used, CoreReader can be run with or without an attending human operator. CoreReader security disables the CLI by default. It must be turned on internally before it can be used. It is turned on by changing the toggle in the system configuration screen. CAUTION!
The CLI controls the GUI load. If the CLI sets the attended operation parameter to false, the GUI will not load, and the system will shut down after the CLI finishes. If the attended operation parameter is true, the GUI will load. The CLI loads, executes all commands, and unloads before the GUI loads. When a needed startup option is not used, the CLI will default to the system settings. Commands passed with the startup will override the system settings. When the CLI completes its operations, it resets all parameters to their original values. The operator must insure that the system settings are consonant with the CLI's objectives. It is critical for all operations to be tested first from the GUI. The highly configurable nature of CoreReader makes it extremely difficult to predict the behavior that will be induced by external commands. When errors occur during operation of the CLI, the system parameters should be checked to insure that resets were successful. Verbose activity logging is recommended during CLI operations because errors and anomalous behaviour may not be displayed. Verbose logging can be turned on in the configuration screen. If any error occurs during the CLI execution, it will attempt a system shutdown regardless of the GUI toggle. CAUTION! It is recommended that a CLI startup never be done when the job server toggle is enabled. If a CLI startup starts the job server without the GUI, it will begin cycling indefinitely and require a computer restart. CAUTION! The CLI should not be used in a datasource server installation. If both the datasource server and the CLI are needed, a computer should be dedicated to each. The system load sequence is:
Moving Data The generalized input format:
The first item on the line must be the executable command, which may be prefaced by the path. Following that is a series of command name and value pairs. Ordinality is unenforced. Within each pair, the name must be separated from its value by a colon. Each name/value pair must be preceded by a semi-colon. Free-form spacing is permitted. Text is case insensitive. All commands are optional. Each pair must be complete. Commands:
The attended command can be problematic, so it should be observed closely. If the attended command is set to attended, and no operator is present, the server will hang until it is unloaded from the GUI. If it is set to unattended, and a faulty command puts CoreReader into a loop, there is no way to stop it without the GUI except to restart the computer. * Either a connection name must be passed, or autoconnect must be turned on. This is an intentional precaution. Autoconnect is checked only if a name is not passed. Connection names must be on file in the system. If a connection name is passed, it will be used for all of the CLI operations. It is by intent that connections cannot be created by the CLI. If a saved query name ( stored procedure ) is passed, it will be attempted. If a SQL statement is passed, it will be attempted. Query names must be on file in the system. A startup with both a query name and a raw SQL statement cannot be accepted. It is possible to hang the system from the CLI. Testing is important.
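As an illustration only ( the command keywords shown here, such as attended, connection, and query, are hypothetical placeholders; consult the command list above for the exact spellings ), a startup line that makes a saved connection, runs a saved query, and shuts down without loading the GUI might take roughly this shape:
c:\corereader\corereader.exe ;attended:no ;connection:SalesWarehouse ;query:DailyTotals
The path, the connection name, and the query name are likewise examples; the connection and the query must already be on file in the system.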
Moving Data Load the system with the GUI exposed for operator control.
Load the system and run the specified query. Do not allow the GUI to load when finished. Auto-connect is on, so a connection will automatically be made.
Make the specified connection after loading. Auto-query is on, so the specified query will automatically run.
Make the specified connection and run the specified query. Then shut down.
Make the specified connection and execute the SQL statement. Output will go to schema based XML.
_________________________
Moving Data Abstracted databases were introduced in release 20909, and this feature is still being investigated for development. This section should be ignored by all except the technically advanced. Even the technically advanced will not usually require these features. Database abstraction is not useful for most database work. Furthermore, since it attempts to map the abstraction tools into every data source ever made in the absence of standards, it maps poorly to the real world when it works at all. Of course, CoreReader's basic entity may be thought of as a virtual database, but CoreReader allows the concept to be amplified through the abstraction of conceptual databases. An abstracted database may be a subset of a database, but not necessarily. CoreReader permits a more complex construct which is constrained by each data source and by its data sockets. In certain configurations, CoreReader will accomplish operations that not only will a data source not normally do, but that we would not even want a data source to do. A canonical presentation by the data source is the foundation of our ability to work with any data source, and what may seem to be more freedom in CoreReader actually derives from the canonical stability of each data source. The work in this area seems particularly interesting as a theoretical pursuit, and any suggested extensions and enhancements would be appreciated. One of the problems to be addressed is whether the manufacturers will allow us to continue to work with such abstractions, or if self-centered forces will stop us. As noted in the previous paragraph, the CoreReader abstraction ability is not a result of any data source weaknesses, but is in fact built upon their strengths. When working with abstracted databases, it is the responsibility of the operator to account for the impact of non-standard factors such as geographical dispersion, network latency, server mismatch, time zones, etc., etc. ad infinitum. The abstraction of a database is done in each connection so that every data connection can become a virtual database instantiation. Another way to think of it is that each data connection may return either a database or an abstracted database.
Moving Data The name prefixes are a space delimited list of masks. The list limits the table type objects that will be loaded to those whose leading characters match one of the prefixes. The list is a case-sensitive exact match. Any number of characters may be used in each mask, and any number of masks may be used, but the list must not exceed three thousand characters. If the list is blank, it has no impact on the loading. If it contains a value, every table type object must match at least one of the listed mask values or it will be excluded from the database load. The masks may be used to decrease operator confusion. Also, they can increase the load speed of very large databases tremendously by excluding objects that are not needed.
Moving Data The name list is a space delimited list of name masks. The list limits the table type objects that will be loaded to those that are explicitly specified in the list. This is in addition to the limits imposed by the prefix list. If both masks are used, an object must satisfy both in order to be included. If the list is blank, it has no impact on the loading. But if it contains a value, only those table type objects that exactly match one of the listed values will be loaded. Any number of masks may be entered with any number of characters, but the list must not exceed three thousand characters. The masks may be used to decrease operator confusion. Also, they can increase the load speed of very large databases tremendously by excluding objects that are not needed.
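As an illustration only ( the prefixes and table names are hypothetical ), a connection that should load only the invoicing objects of a large database might combine the two lists like this:
Name prefixes: inv_ ar_
Name list: inv_header inv_detail ar_customer
With both lists filled in, an object is loaded only if its leading characters match one of the prefixes and its full name also appears, spelled exactly, in the name list.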
Moving Data CAUTION! This is a powerful feature that data sources and data sockets will generally not expect an external system to employ, so the results are unpredictable. The appender should be left blank unless the operator understands it. If it is blank, CoreReader ignores it. This is an internal CoreReader construct and is not part of a normal data connection. Also, this is not intended to be a security bypass. The purpose of the appender is to transcend canonical boundaries by attaching foreign objects to the current database connection. Appends are to be entered into the appender as a space delimited list. The list will be appended to the database load and queries may be run against the named objects. The foundation for this feature is usually a segmented object qualifier which has become ubiquitous through the years. CoreReader will respond to any level of segmentation complexity and accepts multiple domain segmentation which it maps to server specific requirements. Fully qualified table type object names must be used. Qualification will always be with respect to the current connection target. Functionalities of qualifiers are determined by the context. CoreReader's internal design attempted to generalize to the universal construct. Depending upon the environment, CoreReader may evaluate to a table type qualifier, view qualifier, schema target, database identifier, server cluster, etc. It may limit domains or it may cross canonical domain boundaries that are usually considered inviolate, depending upon the environment. The project manager or database administrator must employ the appropriate qualifiers and concatenators for each of his server brands. CoreReader will manage context switching to use them appropriately for each server. Because it is expected to be a list of unrecognized foreign entities, CoreReader will not inspect or edit entries into the appender. It is up to the operator to ensure entry validity. Multiple entries may be entered with space delimiters. Any number may be entered with a total character count not to exceed 3000. The names must be fully qualified. For example, some data sources such as Ms. Sql Server require the inclusion of the dbo. (Which I tend to forget, and which almost caused me to drop the feature from CoreReader because I thought that it had stopped working.) A MySql server, on the other hand, requires only a straightforward object qualification. The appender can be useful for data sources that seem to load objects, but which cannot be queried. For example, an Oracle login may list foreign schema objects but not allow queries on them. Enter the fully qualified names in the appender, and Oracle will then allow them to be queried. Due to the attempt at a universal construct in the absence of standards, it may not perform as expected with all data sources. It will probably not operate at all with the simpler desktop data sources. Its primary importance is in work involving massive enterprise level databases.
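As an illustration only, with invented object names, an appender for a Ms. Sql Server connection would carry dbo-qualified names while a MySql connection would carry plain database.table qualification. The appender list itself is simply split on spaces and attached to the load:

```python
# Hypothetical appender entries; the object names are invented.
appender_mssql = "dbo.Invoices dbo.InvoiceLines"       # Ms. Sql Server style
appender_mysql = "sales.invoices sales.invoice_lines"  # MySql style

# Entries are not inspected or edited; they are split on spaces and
# appended to the database load exactly as entered.
for name in appender_mssql.split():
    print("appending:", name)
```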
Moving Data The universal qualifier is designed to either redesignate or qualify the entire database. Each table type object will be individually prepended with the qualifier. Only one qualifier may be used. It may be of any internal level of complexity and of any length up to a thousand characters. Leading and trailing spaces will be removed. Other than that, because it resides in the universal namespace, it cannot be edited by CoreReader and the responsibility for edits reverts entirely to the operator.
_________________________
Moving Data This section may be ignored until you want to log the system activity. The system contains an activity logger which, when activated, watches the system activity. It can record most activity and attempts to record all errors. There are separate settings for GUI logging and for the gateway logging, but they write into the same log. Logging is off by default. To start activity logging, press the admin button to open the administration screen. Select the log frame and change the logging toggle to yes. This will produce a minimal log of operations. Errors are logged only when logging is turned on. When logging is turned on, the system attempts to trap and log all of its own system errors. Some operator errors will also be logged, but verbose logging is required to record all operator errors. (See below.) High speed server operations with logging have been tested. However, high speed is a relative term. Local testing and discretion are advised. For system development and tuning, the log provides a quick picture of relative operation timing in the first column. The second column normally contains a placeholder. If it contains a flag character, the message may require immediate attention.
Moving Data To protect the computer infrastructure and not burden the operator with maintenance, CoreReader maintains a rotating event log. After the log reaches a specified size, CoreReader empties it and continues writing to it. Log truncation is determined by the log size set in the configuration screen. The default is one million bytes, which is based upon the assumption that the system is in single-user operation. The maximum setting must be at least 1000. Actual truncation of the log was considered, but CoreReader is used by many neophytes who would not understand the time required for such an operation. Entirely removing the automatic operation was also considered, but many people would find their hard disks filled. Therefore, the only realistic approach is to empty the log entirely when it reaches the specified size. Be wary of causing database thrashing. If you are running CoreReader servers or a multi-user installation that creates a thousand records a minute, and you set the maximum at a thousand, the system performance may suffer.
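The rotation rule amounts to "empty the file and keep writing once it reaches the limit." A minimal sketch, assuming a plain text log and an invented file name; CoreReader's actual implementation is not published:

```python
import os

LOG_PATH = "corereader.log"        # hypothetical location of the event log
MAX_BYTES = 1_000_000              # default limit set on the configuration screen

def write_log_entry(line):
    """Append one entry, emptying the log first if it has reached the limit."""
    if os.path.exists(LOG_PATH) and os.path.getsize(LOG_PATH) >= MAX_BYTES:
        open(LOG_PATH, "w").close()            # empty the log and keep writing
    with open(LOG_PATH, "a") as log:
        log.write(line + "\n")

write_log_entry("0.12 |  job queue loaded")
```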
Moving Data Verbose logging is a detailed log of operations. When logging is turned on, it defaults to minimal entries. To record detailed operations, turn on logging and then enable verbose logging. Verbose logging is especially useful for debugging. It provides a tool for the administrator to assist neophytes with problem queries. If security is a concern, verbose logging is recommended. When a connection is made, it records all of the connection parameters except for passwords, and it records query details.
Moving Data The statistician's toggle is on the same GUI frame as the log toggle in the admin screen. Statistics are cumulative. If necessary, the values can be initialized by deleting the table data file. Values have been increased to double integers. The statistician has been altered to fit the needs of CoreReader's new high speed job servers and concurrent operations. In the past, the frenzied updating of the statistics table sometimes actually overloaded the operating system and other parts of the infrastructure. Each instance now saves its statistics internally while it runs and updates the stat table when it is shut down. The entries in the statistics table are:
_________________________
Moving Data To find the query manager, press the "query lib." button to display the query manager. If you do not want to save queries for later use, and if you do not need the auto-query feature, and if you do not need the job server, then this section may be ignored. CoreReader can be used to build a table of queries that can be re-used as needed. After a query is created, it can be saved into CoreReader's query table and retrieved when needed. (Database servers offer a comparable facility that they call "stored procedures.") After a connection is made, a stored query can be selected and run. Notice that the database does not need to be loaded to use the query table. A few possible uses:
Moving Data First, create a query and test it. As a query is being created, CoreReader is busily translating it into SQL code in the background. Now press the query manager button to display its screen. On the query manager screen, enter a unique name for the query in the name text box. The name is required. Enter a description in the description box. A description is not required, but is recommended. It will appear when the query is loaded in the stored query frame. Press the get-sql button. The query will be copied from the main query screen into the query text box. Press the save button to save it. If the query has the same name as an existing query in the table, the existing query will be replaced by the new one.
Moving Data Select the query in the library drop-down box to retrieve it. Make any needed changes and press the save-query button. Notice that if the name of the query is changed, it will be saved as a new query.
Moving Data To remove a query from the table, select its name in the library drop-down box and then press the delete-query button.
Moving Data Every query is named before saving. Some sort of descriptive naming system should be used that makes sense to the operator, but the names should be as short as possible. Queries can be run only from the main query form, but they can be loaded in two ways. The stored query frame can be displayed on the main form. It loads all query names that are on file, and one can be loaded by selecting it. Also, the query table form can be displayed, a name can be selected, and the put-sql button pressed to copy the query into the main query form. After loading the query, press the query button to tell CoreReader to run it. For the manager: When each query is saved into the table, it is saved with a shared code of "no". Before saving a query, its shared code can be set to "yes", which will allow others in the department to load and run the query. A shared query can be changed only by its owner. CoreReader maintains the table in an operator-accessible text file on disk, but hand editing is not recommended. If the file is rendered inoperable, it can be deleted, and CoreReader will initialize a new one. Caution! Database backups are recommended.
_________________________
Moving Data This section may be ignored when starting. However, it can be helpful reading when you have the time. CoreReader's objective of querying every data source ever made required a standardized data type handler. Since every data source handles data uniquely, some compromises are made. To universally accommodate data source software, all data is treated as either alpha or numeric.
Moving Data CoreReader's default behaviour is to treat an entirely numeric value as a number. For some high end servers, even this is too much data typing and it is better to allow the server to handle it. In that case, CoreReader's numeric data typing can be turned off to allow the data source to handle all data typing. This is covered in the configuration section. For genuine database servers, it might be a good idea to disable numeric data typing in the configuration screen. However, desktop database managers such as Ms. Access may need to be told what type of data they have in each column, so numeric data typing should be left enabled for them.
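In practice the setting determines whether an all-digit value is written into the generated SQL as a number or as quoted text. The function below is an illustrative assumption about that decision, not CoreReader's actual code:

```python
def render_literal(value, numeric_typing_enabled=True):
    """Decide how a value is written into generated SQL.  With numeric typing
    on, an entirely numeric value is passed as a number; with it off, every
    value is quoted and the data source does its own typing."""
    if numeric_typing_enabled and value.strip().isdigit():
        return value.strip()                           # treated as a number
    return "'" + value.replace("'", "''") + "'"        # treated as alpha

print(render_literal("1024"))                                   # 1024
print(render_literal("1024", numeric_typing_enabled=False))     # '1024'
print(render_literal("O'Hara"))                                 # 'O''Hara'
```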
Moving Data BLOBs are not data. Normally, when CoreReader encounters a BLOB, he takes steps to protect the system from it, and then ignores it. However, a provision is made for them since a database may store them. Every BLOB type requires its own specialized software handler, so if their display is wanted, CoreReader must be told to pass BLOBs to their software handler. The BLOB handler must be able to accept the BLOB or its address as a startup command parameter. To tell CoreReader to hand BLOBs to a handler, press the configuration button to open the configuration screen. On it, set the blob toggle to yes, and save the configuration. Enter the name of the handler program in the blob handler field. Some computer systems may require the full path of the handler with the name. A database may use pointers or may embed, so CoreReader must be told which method is used in the database by selecting one of the locator methods in the configuration. If "pointed" is selected, CoreReader will pass the required address to the BLOB handler. If "embedded" is selected, CoreReader will retrieve the BLOB and hand it to the handler. Since only one blob can safely be displayed automatically, text output will display the blob in the last record. CoreReader will look for the blob in the last column of the last record. If anything else is in that column, the results will be unpredictable. After the record is retrieved, the blob will be displayed. When blob display is enabled, CoreReader enables the blob button on the data browser. Pressing the blob button tells CoreReader to hand the BLOB specified in the current grid square to the specified BLOB handler. It is the responsibility of the operator to ensure that the cursor is in the correct column when the blob button is pushed. An invalid column may produce unpredictable results. ( Since BLOBs are seldom stored within a database, only the pointer currently functions. The completion of the embedded locator function has a low priority, and may not be completed until somebody indicates that they actually need it. )
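In rough terms, the hand-off is just the launch of the configured handler with the BLOB's address as a startup command parameter. The sketch below assumes the pointed locator method and uses invented paths:

```python
import subprocess

BLOB_HANDLER = r"C:\tools\imageviewer.exe"    # hypothetical handler program

def hand_off_blob(pointer):
    """Launch the configured handler with the BLOB's address as its
    startup command parameter and let the handler display it."""
    subprocess.Popen([BLOB_HANDLER, pointer])

# The pointer found in the current grid square, e.g. a path stored in the
# BLOB column (an invented value).
hand_off_blob(r"C:\data\blobs\invoice_0042.png")
```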
Moving Data If the text is empty for a -where- operation, the system will assume that only records which have nothing in the column are sought. The difference between null and nothing is handled differently by various data sources, so the same query can produce different results from different sources.
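As an illustration of why results differ, the same "column is empty" intent can be expressed as a null test, an empty string test, or both, and data sources disagree about which one matches their missing values. The statements below are generic examples, not CoreReader's generated code:

```python
column = "middle_name"

# Some sources treat a missing value as NULL, some as an empty string, and
# some distinguish the two, so the test may need either form or both.
test_null  = f"SELECT * FROM people WHERE {column} IS NULL"
test_empty = f"SELECT * FROM people WHERE {column} = ''"
test_both  = f"SELECT * FROM people WHERE {column} IS NULL OR {column} = ''"
print(test_both)
```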
Moving Data The handling of text case can be problematic in object names and in data variables. CoreReader addresses both areas to relieve the operator of those concerns. Operating systems and servers create confusion through the imposition of varying case handling requirements for object names. To ensure compliance with the various naming requirements, regardless of the complexity of the environment, simply allow CoreReader to control the names of all objects in the queries. CoreReader tries to make variables case-insensitive to simplify query requirements.
Moving Data Each data source handles dates differently, so if it cannot handle a date, try a different format. CoreReader's query generator will attempt to use any format that is given to it. Some low end data sources have a problem with date delimiters. If that problem is encountered, the operator must handle it manually.
__________________________________________________ Chapter Job Manager and Job Server __________________________________________________
_________________________ A job is a group of operations and parameters that were grouped together by the operator to perform a specific function. The job might, for example, be the update of a web page every hour. Or perhaps it is the preparation of a report at three in the morning so it will be ready for the boss when he gets to work. Or maybe the update of systems via an XML delivery. The main components of a job:
Each job has a run schedule which is stored with it. It might be run every hour, every five days, at two in the morning, on the first of each month, etc. etc. When its time arrives, it executes and then waits for its next run time. Note that some types of jobs use a connection and the query which are named in the job. Those job components must be previously created and saved so that the job can reference them by name. A connection or query can be used in multiple jobs.
_________________________ Suggestion: In a multi-user installation which will also be running the job server, it is frequently a good idea to have the network administrators create an ID for the job server. That identity can be used to log onto the job server computer so that log entries and problems can be better identified.

Every time that one wants to run a job, one could load it and press a button. Instead, we can tell CoreReader to do it for us while we attend to other matters. The Job Server automatically runs jobs. When it is started, it reads all of the applicable jobs that are on file and begins running them according to the schedule of each. It will continue running jobs day and night until it is told to stop. The Job Server is an automatic process which does the actual job runs. It runs in the background while watching the job schedules. When each job schedule becomes current, the server runs that job. We might take issue with calling it a server because it runs with the GUI loaded. Most servers do not do that because a GUI event can hang the server. The CoreReader Gateway Server, for example, has no GUI. However, the Job Server is designed for GUI interaction to simplify its operation and control.

The Job Server is not part of the Job Manager screen. It is a separate invisible object that is controlled from the Job Manager screen. It is important to remember that the server cannot be seen. It always runs in the background. For those who are accustomed to the graphical interface, the server may require a new way of thinking. The Job Manager is that part of CoreReader which stores and retrieves jobs and kicks off the Job Server. It is accessed on the job screen of the GUI.

Allowing the GUI to be available while the server runs creates a potential problem: not all systems are well designed, and when a GUI is displayed, it allows poorly designed data sockets and other systems to display messages on-screen. When that happens, the system hangs until an operator clears the message.

( Note that the Job Server and the CoreReader Gateway Server are entirely different sub-systems. The CoreReader Gateway Server is a hidden interface which can be seen and operated only by other systems. NEVER start the Job Server when the CoreReader Gateway Server is being used. )
_________________________
Job Manager And Job Server Running multiple servers in an installation is not recommended. The problem lies in two areas: the inability of the infrastructure to reliably handle the load that CoreReader can put on it, and CoreReader's internal data tables. This problem includes all servers: the job servers and the gateway servers. CoreReader's embedded database manager has been tested up to a hundred thousand concurrent record inserts per hour and has so far performed reliably. But CoreReader's job server can overwhelm itself when it is competing with other job servers. The job servers can begin lagging behind as they queue to read and write in their database. As that lag increases, it can eventually push them past the start times of their next job runs. The CoreReader server is capable of loading the infrastructure far beyond its abilities because the server drives not only itself, but also drives external systems at its high speed. Although unusual outside of testing, it must be assumed that a heavily loaded server can overwhelm the local computer, network components, and remote database servers. This has been observed in CoreReader's test environment.
Job Manager And Job Server It is possible to run multiple servers if the administrator is intimately familiar with CoreReader, the jobs, and the environment inside the systems, and monitors daily operations. (This discussion should be considered applicable to the gateway server as well, and to any combination of the two server types.) Run each server on its own computer. Multiple servers should never be run on a single computer. The ability to run multiple instances on a single computer has been included in CoreReader since all needs cannot be foreseen, but it is not recommended. Local servers on the same computer will compete for the computer's resources, and external systems, such as providers, will be unable to handle the demand, with consequent disruption. Contrary to intuition, servers sharing a computer at high speed sometimes even become synchronized, which escalates the problem. Also contrary to intuition, jobs that run simultaneously or even close to each other should not be run by separate servers, but should be run by the same server because he is designed to handle such problems. When simultaneous or closely scheduled jobs are run by separate servers, testing has found that they will interfere with each other at random intervals. Jobs that use the same data source should not be scheduled at or near the same time regardless of the server on which they run. Although it runs remotely and through data sockets, a CoreReader server can overwhelm a database manager. The problem is sometimes a slow degradation at the database manager end until it becomes unusable. Again contrary to intuition, it is not always a good idea to give the servers a high speed network unless massive amounts of data are being moved. A slower network tends to assist by slowing each server so that other servers and processes can share the network. As ridiculous as it may sound, it may sometimes even be best to have a network of under a hundred meg/sec for job servers. Multiple servers should be run on computers that are of approximately the same speed, and the database should be on a fast disk in a fast computer. When a server is much slower than the others, it has been observed to have trouble getting into its database because the faster servers sometimes create a cycle wall around the database. The administrator should monitor the system health by checking the log each day for problems. This is critical in a production environment. That can be done quickly, perhaps each morning, by loading it into a text processor to search for errors and then quickly scrolling through it.
_________________________ The CoreReader Job Server must load the jobs into memory so that he can cycle at a high rate. He must be able to rapidly load, unload, and reload data sockets which were built by companies that do not mind spending your money on hardware, so those objects are sometimes massive. He must be able to retrieve large data sets from the data sources and then he must manipulate those data sets in RAM to conform to the needed output. A summation of those needs is simply that the CoreReader Job Server's needs are similar to those of a database manager. They dictate that a CoreReader Job Server must have as much high-speed RAM as possible. He is designed to minimize disk impact, so he will run with the same disk that is used for his maintenance, assuming of course, that the disk can support the local job requirements.
_________________________
Job Manager And Job Server Press the jobs button. The Job Manager screen will be displayed. The Job Manager can create new jobs and edit existing jobs. When told to save a job, it checks parameters and notifies the operator of any problems that it can identify. In a multi-user installation, an operator can edit only his own job. Connections and queries that are shared may be used in all jobs by all operators.

A job name must first be entered or selected. A new name can be typed into the name box to create a new job, or one can be selected from the drop-down box for editing. Max name length is 50 characters. To edit an existing job, click on the job name drop-down box. It will load the job names from the database. Selecting one of them will cause the name to be copied into the name box. That will also cause all of that job's parameters to be loaded into the appropriate areas.

An existing job can be used to help create a new one. Select the job to fill the appropriate areas with its data. Then change the name to a new one. Other items may also be changed. The job can now be saved as a new one. After a valid name is entered, select the components panel.

Ownership of a job is controlled by two factors: the owner name and the shared parameter. The shared parameter controls viewing and running. Editing is controlled by the owner's name.

To delete a job, select it or enter its name. Then press the delete key. A delete is forever. Regular backups are recommended. To test a job, see the Execution section of the documentation.
Job Manager And Job Server A job type must be selected. The types are listed on the name panel. To assist with the complexities of data entry, the selection of a type will clear and disable entry fields that do not apply to that type. There are four types of jobs. They are listed on screen as:
If you need to inspect the job table, you will find them saved as:
Job Manager And Job Server Notes Each job may have a free-form description or notes containing up to 200 characters.
Shared The shared parameter defaults to yes which allows others in a multi-user environment to run the job. A job can be edited only by the owner regardless of this setting. If the parameter is set to no, only the owner can run it. Sharing a job accomplishes multiple purposes.
Enabled There is an enabled parameter which defaults to yes. A job may be disabled by setting that parameter to no. If a job is disabled, the server will not run it. This permits saving work during development without fear of it being run. The test utility will run the job whether or not it is enabled.

Disable On Error The server's default behaviour is to protect the infrastructure, so he normally disables a job when it fails. That is only a temporary disabled state while the current job queue is loaded. When the server is restarted, that job will again be queued and run. If the "disable on error" parameter is changed to no, the failed job will not be disabled and will be re-attempted on each of its scheduled runs. (This should not be confused with the "abort on event miss" or the "stop on error" parameters.)

Assigned Server The execution of multiple job servers produces a complex environment which may be difficult to manage. This parameter is designed to lessen some of the problems inherent in managing multiple servers. When multiple departmental job servers are run, every server will run every job, which may produce duplications of jobs. This parameter can assign each job to a specific server. Each server will load and queue only those jobs that are assigned to it. Each job can be given an assigned-server when it is created and saved. The assigned-server can be a computer, logon, or an arbitrary runtime ID. If a runtime ID is used, it may be any number, name, or string of characters up to twenty characters. It is not case sensitive. The default string of "any server" is the same as a blank entry. When starting the server, a runtime name may be entered into the text box on the start screen. The way that the server uses this feature is determined by the presence or absence of any entry in that box. If NO runtime name is assigned to the server when it starts, then it will run all jobs which:
If a runtime name is assigned to the server, then it will run all jobs which:
If the runtime name that is assigned to the server is an asterisk, then it will run all jobs which:
To lessen system stress, it is usually preferable to run internal housekeeping utilities on the machine where the database resides.

Spinlock Each job has its own spinlock value. See the discussion in the System Configuration chapter. A spinlock failure may not disable a job. For example, an insert job knows that the file is present before trying to open it, so if it cannot open the file within the spinlock, it will simply quit trying until the next cycle time.

Buffer Time This setting is updated on the system configuration screen because its value applies to all servers. The default value is 10 seconds and the value cannot be set below that. This setting is critical to the job server. In most cases, it should not be changed, but its importance is such that it should be considered, and frequently reconsidered, by the installation administrator. The optimum value is dependent upon local conditions, hardware, environment, and jobs. The job server must continually make many decisions as he runs. Whether or not to run the next job is itself a complex question. If the job is scheduled to run at a time, and the time is now twenty four hundredths of a second before that scheduled time, should the job be run? Or if the current time is five seconds after the scheduled time, what then? If the job is to run on the hour, and it was run two minutes ago, should it be bypassed? The buffer setting is used by the server to help with those decisions. He will run any job that has a scheduled time that falls within the buffer, and he will run no job that has been run within the buffer. If a job's event value falls below the buffer value, the server will disable the job until he is restarted. That is done because the job's run would otherwise be unpredictable, and the server calls attention to the problem by disabling it. The disabling action is logged. It is important for the administrator to remember that the buffer value is in seconds. Although the default setting should be assessed for local validity, it should be changed only after a great deal of thought.

Abort On Miss This parameter can tell CoreReader to skip a job if it is not run on schedule. The scheduled time is plus and minus the buffer time. This is discussed more extensively in the Scheduling section.

Run On Event Three special events can be monitored and used to initiate an external process. They are "run on error", "run on run", and "run no data". An example would be running a mail sender to alert personnel to an event. The "run on error" will be initiated if an error occurs in the job. Not a general system error, but a job error. The "run on run" will be initiated each time that the job starts. The administrator must recognize that this means every time that the job starts. The "run no data" is a special event that will be observed only by those jobs that manipulate data. It will be initiated if no data is found or delivered. Although a lack of data is not an error for the job, it can sometimes be catastrophic for a business. When an event occurs, the corresponding external process will be initiated if a file name has been entered for that event. The complete path and name of the executable object must be entered, up to two hundred characters. A parameter string of up to two hundred characters may also be entered for each one. If it is present, it will be correctly passed to the object on execution. An important detail is the handling of quotes.
Some strings or paths should be quoted and some should not, and sometimes a parameter string requires special internal quotes. Therefore, CoreReader must avoid the issue and the use of quotes is entirely the responsibility of the administrator. For example, a space in a path without quotes will fail. CoreReader can assist with message parameters. If the character string
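Returning to the buffer described under Buffer Time above, the run-or-skip decision can be pictured roughly as follows; the ten second default comes from the text, while the function itself is only an illustrative sketch, not CoreReader's internal code:

```python
def should_run(now, scheduled, last_run, buffer_seconds=10):
    """Run a job whose scheduled time falls within the buffer of 'now', but
    never re-run a job that has already run within the buffer.  All
    arguments are in seconds (for example, time.time() values)."""
    due = abs(now - scheduled) <= buffer_seconds
    just_ran = last_run is not None and (now - last_run) < buffer_seconds
    return due and not just_ran

# Scheduled for 12:00:00, it is now 12:00:04, last run an hour ago: run it.
print(should_run(now=43204, scheduled=43200, last_run=39600))   # True
# The same job checked again six seconds later: it just ran, so skip it.
print(should_run(now=43210, scheduled=43200, last_run=43204))   # False
```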
_________________________ An extract type of job extracts data from a database. All output from the job server goes to disk.
Job Manager And Job Server A data extraction job requires a data connection and at least one query. On the components panel are found the names of connections and queries which were read from the database into two drop-down boxes when the screen was loaded. A job requires the names of a connection and a query that are on file. Max name length is 50 characters. Enter an existing query name in the query name box or select one from the drop-down box. Up to two hundred characters may be entered. A job may have multiple query names. Multiple queries will be run sequentially in the sequence in which they are listed in the job. They must be separated, not delimited, by the vertical bar which was known as a pipe in the old Unix world.
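For instance, a job that runs three stored queries in order might carry an entry like the one below; the query names are invented:

```python
# The job stores its stored query names as one string, separated by
# vertical bars.
job_queries = "daily_sales|open_orders|backorder_summary"

# The server runs them sequentially, in the listed order.
for name in job_queries.split("|"):
    print("running stored query:", name)
```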
Job Manager And Job Server Every extract job requires a file name without a suffix. When the job runs, CoreReader will assign a suffix that indicates the file type. Max name length is 20 characters. Every extract job requires an output location which is where the file will be delivered. A root directory is not permitted. CoreReader must be able to see and write to that location. It must be a complete and valid path without the file name. CoreReader will not create the location. Max path length is 100 characters. CoreReader uses RAM like a database manager to build the returns. If he runs out of memory while building a return, CoreReader will log the problem, disable the job, and move on to the next job.
Job Manager And Job Server When the fixed width text output is specified, the output is delivered into columns of a certain width. Each column of data has the same width from top to bottom. CoreReader decides upon the width for each column as he builds the output. In most cases, he can get the width by asking the data source for it. However, sometimes a data source, including the big name brands, does not know the width of the data. When that situation is encountered, CoreReader makes a broad guess based upon his knowledge of database managers. ( Not all data types in all data sources have been handled. Notify me if you encounter one that you need handled and I will get right on it. ) Since the output is manipulated and built in RAM, a fixed width return can require a very large amount of RAM. If he runs out of memory while building a return, CoreReader will log the problem, disable the job, and move on to the next. If that happens, the computer will need more RAM installed to deliver that job. The file name suffix will be txt.
Job Manager And Job Server Variable width tells CoreReader to adjust the width of each column in each record to fit the data by removing all blank space. This output type can reduce the output size tremendously. It is usually used by system imports because it is easier to program for than is fixed width. Variable width output uses a column separator between all columns to define them. The separator has a default value, but can be changed for a job in the on-screen text box. The server will accept any character that can be gotten into that text box. The server does not delimit data. The file name suffix will be txt.
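The difference between the two text outputs can be sketched as follows. The widths here are taken from the data itself, which mirrors the fallback guess described above when a data source cannot report widths; the code is illustrative, not CoreReader's:

```python
rows = [("1001", "Anders", "Oslo"),
        ("1002", "Li",     "Singapore"),
        ("1003", "Okafor", "Lagos")]

# Fixed width: every column is padded to one width from top to bottom.
widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
fixed = "\n".join(" ".join(value.ljust(width) for value, width in zip(row, widths))
                  for row in rows)

# Variable width: all blank space is removed and a separator marks the columns.
SEPARATOR = "|"                        # the default value, changeable per job
variable = "\n".join(SEPARATOR.join(row) for row in rows)

print(fixed)
print(variable)
```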
Job Manager And Job Server Web page output is designed primarily for the update of web sites. Web page output generates a web page using standard HTML. CoreReader decides how to construct each page in response to the type and size of the data. The file name suffix will be htm.
Job Manager And Job Server For webmasters who need their pages "just so". This is intended to automate the update of large web sites. This option prints only the HTML table which can then be embedded in a customized web page. The file name suffix will be htm, so it appears to be a complete web page, but it is not.
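A bare-bones picture of the two HTML outputs, with invented data; CoreReader's actual page construction responds to the type and size of the data and is not reproduced here:

```python
rows = [("Region", "Units"), ("North", "120"), ("South", "87")]

# The table fragment alone, suitable for embedding in a customized page.
table = "<table>\n" + "\n".join(
    "  <tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
    for row in rows) + "\n</table>"

# The full web page output wraps the same table in a standard HTML skeleton.
page = f"<html><body>\n{table}\n</body></html>"

print(page)
```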
Job Manager And Job Server This output is designed to feed automated systems. CoreReader will generate standard schema-based XML output. He decides on the structure of the file based upon the data that is returned from the data source. He prints the schema with the XML so that a system can read schemata as needed. This feature is provided to assist with data transfer between systems. The file name suffixes will be xsd and xml.
Job Manager And Job Server This output is designed to feed automated communication systems. CoreReader will generate standard SOAP output for transmission between systems. He designs the output on the fly based upon the data. The specification of SOAP alters CoreReader's error handling because the SOAP protocol is designed for systems. Error strings retain the CoreReader compliance and are encapsulated within an XML return which is encapsulated within the SOAP error protocol. The file name suffix will be sop.
Job Manager And Job Server Most spreadsheet programs will recognize a csv file as their own. Double clicking it will load it into a spreadsheet on most computers that have a spreadsheet program installed. The file name suffix will be csv.
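A minimal example of the kind of file that results, written here with Python's standard csv module and invented data:

```python
import csv

rows = [("invoice", "amount"), ("1001", "250.00"), ("1002", "99.95")]

with open("extract.csv", "w", newline="") as f:    # hypothetical output file
    csv.writer(f).writerows(rows)
# Double clicking extract.csv will open it in the installed spreadsheet
# program on most computers.
```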
Job Manager And Job Server See the Spreadsheet CSV output sub-section. The creation of Excel files requires that the server load Excel objects and then work through them. When the Excel objects misbehave, since they are built by a billion dollar corporation, my work is blamed first. I have watched that happen on job sites. We will not do that to CoreReader. Therefore, Excel files will not be created. The csv file will suffice because it can be loaded into the spreadsheet and saved as an Excel file.
Job Manager And Job Server ( Under consideration. Delivery of a format spec may depend upon somebody asking for it. These are not difficult, but simply require precious time.) The formatting capability is provided mainly for reporting needs. Not all formats are applicable to all output types. For example, coloring is irrelevant in text files, but text files can have other options such as comments. When a format is specified for an inappropriate output type, the specification is simply ignored and will not cause an error. Where multiples are permitted, such as for header lines, they should be separated by the standard separator. The CoreReader standard separator is the vertical bar which was known to the old Unix programmers as a pipe symbol, |. This feature allows multiple entries, but it also means that the separator cannot be used in the formatting.
If column headers are specified, CoreReader will put the names of the columns at the top of each column. If they are aliased in the query, he will use the alias as the header.
Header lines will print at the beginning of the output before the data. They will be centered and set apart from the data. CoreReader will allow customized HTML or XML formatting to be embedded in them.
Footer lines will print at the end of the output after the data. They will be centered and set apart from the data. CoreReader will allow customized HTML or XML formatting to be embedded in them.
Comments are freeform text. CoreReader will allow customized HTML or XML formatting to be embedded in them.
Each element that can use a bold font has its own bold setting. The default is no.
This may make sense only to webmasters. It is provided to allow the automatic update of web sites without human intervention. If the return page is provided, CoreReader will use it to create a link and will place it at the top of the page. The return page should consist only of the path, which must be complete. CoreReader will generate the needed HTML to communicate with the web browser and the web server.
The range is a specification of a dataset range which is similar to a spreadsheet range.
Coloring can be specified for most elements. Coloring is applied to a range after it is applied to the table.
_________________________
Job Manager And Job Server Because some computer operations such as file jobs appear simple to beginners, these are sometimes the first to reveal the beginner. Be careful.

File jobs have double events. The first is the schedule event which executes the job. The second is the appearance of the source file. If the source file is present when the job runs, the job will continue. If the source file is not present, the job will end. The failure of a source file to appear is not an error as far as CoreReader is concerned. Its absence could be caused by many factors in the environment which are of no concern to CoreReader. If, however, when the file is received, the target location is not available, that is considered a job error.

If the target file name is the same as the source file name, then the target file name may be an asterisk (*) placeholder. The system will enter the name for you at run time. If there is a wildcard character in the source file name, then the target file name must be a single asterisk. If there is a wildcard character in the source file name, then CoreReader will allow the entry of an exclude file name. That file will not be included in the operation on the set of files. If multiple files are processed in a job and one file fails, that failure will not stop the job.

When any type of file job executes, the server attempts an exclusive lock on the source file to ensure that no other process is working in it. The job will continue if the lock succeeds. If the lock fails, the server will apply the spinlock. If there is a total lock failure, the server will raise an error and fail the job.

An archive job will inspect and purge the archives after archiving the current file. Any file that is not recognized as an archive will be purged. The current file will be renamed by prepending the archive value. The archive value is the standard ragandate, which allows up to 99 files to be archived per second. A single server will probably not achieve that rate because it must pause for various reasons, and those pauses plus the normal cycle load will reduce the archive rate. Multiple archive jobs may be directed to the same location, but the retention of the archives will be determined by the shortest archive value. The archives are purged after the main operation so that an error will leave the older archival copies intact.

If a move job or an archive job fails to delete the source after making the copy, the copy will be retained to assist the administrator in determining the source of the problem. An error will be raised as usual. There is a possibility that the operating system may fail in the middle of a rename job. The possibility is very remote, but real, that the file may be lost. If the file cannot be replaced or reconstructed later by administrator intervention, then the job stream should be constructed accordingly.
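The lock-then-spinlock behaviour described above can be pictured roughly like the sketch below. The rename-as-lock trick, the retry interval, and the error handling are illustrative assumptions, not CoreReader's internal mechanism:

```python
import os
import time

def acquire_source_file(path, spinlock_seconds=5, interval=0.25):
    """Try to take exclusive use of the source file.  If the first attempt
    fails, keep retrying for the spinlock period; a total failure raises an
    error so the job can be failed."""
    deadline = time.time() + spinlock_seconds
    while True:
        try:
            working = path + ".locked"
            # Renaming the file fails if another process still holds it,
            # which serves as the exclusive lock in this sketch.
            os.rename(path, working)
            return working
        except OSError:
            if time.time() >= deadline:
                raise RuntimeError(f"could not lock {path} within the spinlock")
            time.sleep(interval)
```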
Job Manager And Job Server A backup is a special kind of file manipulation job. A backup will attempt to copy all files and directories in and below the source location. Backups take longer than necessary because CoreReader makes frequent pauses to allow other objects on the computer to run. For large backups, the pauses can amount to seconds or even minutes, but if he did not do that, everything on the computer would hang until he completed the backup. That also protects operations on the source and target computers. A large backup tends to place a heavy load on the infrastructure. If an error occurs during the backup which does not kill the job, such as one file not being copied, the damage is done, so CoreReader will attempt to continue the job. The error will be logged. When a test is run, such errors will halt the test.

Timeout: Backup jobs tend to be larger than other file jobs, so CoreReader allows a total job time up to the amount specified by the system general timeout. If the backup does not complete in that time, CoreReader will abort it and proceed to the next job in the queue. If it completes in less time, the server will immediately return to servicing the job queue. The default value for the system general timeout is 1800 seconds; 30 minutes. It can be changed in the system administration screen.

The retention parameter for a backup is required. Any items in the target location that are not dated backups will be purged, and those that exceed the retention period will be expunged. The purge will execute after the main operation so that an error during the backup will leave the older archival copies intact. A note of caution: an indefinite retention is not the same as no retention period. If the indefinite retention is specified, then the backups will never expire and none will ever be removed. That can overwhelm a storage site if the site is not supervised and managed.

Backups are renamed by prepending the archive value, which is a standard ragandate value, so it is theoretically possible to store up to one hundred backups per second in the same location. Take caution when directing multiple backups to the same location. The location will be cleaned of all objects that do not have an archive value, and expired backups will be purged even if they came from other jobs. Therefore, jobs that share a target location should be of the same retention type and value.

Directory Limit: Backup jobs are limited to 10,000 sub-directories to protect the computer. A job error will be returned if that is exceeded.

Timeout: If the compilation of the directory structure exceeds ten minutes, then CoreReader must suspect that there is a problem, so he will abort the process. A job error will be returned to let the administrator know that the backup was not completed.

Statistics for each backup are entered into the log.
_________________________
Job Manager And Job Server The event structure of an insert is usually different from other job types. An insert depends upon the availability of the specified data file. The availability of that file usually depends upon its delivery by an external app. Since CoreReader has no control over that app, the lack of that file is usually not considered an error. The job will cycle as specified to look for the file on each cycle. If it is there when the cycle begins, it will be processed. If it is not there, the server will move to the next job in the queue. (If you are familiar with AxleBase, you should note that this is not an AxleBase import. The job is handled by CoreReader mechanisms and by the target database manager.) A CoreReader insert is not a mass movement, but is done one record at a time. This is slightly slower than some other systems, but it allows greater control, especially of errors. If a record is erroneous, the process can continue in a controlled manner and the bad record can be logged.
Job Manager And Job Server The target of an insert is where the data will go. The target requires the name of a data connection. That connection will be used to connect to the target database. An existing connection name can be selected from the dropdown list or typed in. The target also requires a table name in the database. The table must exist before the job runs.
Job Manager And Job Server The data source is the name of a data file and its location. The location will be checked when the job is saved. The file name must be the complete name of the file including the suffix. The existence of the file will not be checked because the file may be delivered to the target location at a later time. Each record in the data file must be terminated by the standard Ms. Windows line breaks. Records must not be delimited. Columns in the data source file must not be delimited either. The data separator is required and cannot be a space. The best practice is to use a single character, but CoreReader will accept multiple character separators up to ten characters long. The columns in the data source must be separated by the specified separator.

The job must map the source into the target. The mapping tells CoreReader where to put data and gives flexibility to the job. Mapping is done by entering the information in the insert map frame. Enter the total number of columns that will be found in the source regardless of whether or not all will be inserted. That number need not match the number of columns in the target table, but must be the total number that will be found in the source file. When the number of columns is entered, press the refresh button to reveal the other map controls. You are now prepared to tell the system which columns to select from the text file, some of their characteristics, and into what columns they will be inserted. Remember that the numbers are the numbers of the columns in the source file. You will tell the system into which table column to insert each of them. You need not select all columns for insertion. They need not be selected in order. You may repeat selections to insert them into multiple columns.

The refresh button will cause the column number list to be written into the dropdown list of column numbers. Select the column number. Beside it, select its data type. Enter the name of the table column into which it will be inserted. The column width is not required. It is the width of the column in the source text file and is needed only for fixed width text files. After making those entries and selections, press the enter button. The values will be saved in the review list.

The data type of every insert column must be specified so that CoreReader will know how to construct the inserts. However, it has been decided that the best way to handle the mountain of invalid data in the world is for CoreReader to shun any responsibility for trying to validate it. If a column is specified for insert, but the inbound data is blank, CoreReader will attempt to insert a null.

Many developers like to include header lines in their data files for debugging when the files are being produced by unattended apps. The job can be told to skip header lines by entering a value in "lines to skip". The system will skip that number of lines at the beginning of every text file before beginning the data processing.
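Pulling the pieces of the mapping together, the run-time behaviour amounts to something like the sketch below: skip the header lines, split each record on the separator, pick out the mapped source columns, and insert them one record at a time. The column numbers, names, separator, and file name are invented for illustration:

```python
# Map of source column number -> (target table column, data type); all of
# these names and the separator are invented for illustration.
column_map = {1: ("invoice_no", "numeric"),
              3: ("customer",   "alpha"),
              4: ("amount",     "numeric")}
SEPARATOR = "~"
LINES_TO_SKIP = 1                       # header lines added for debugging

def build_insert(record, table="invoices"):
    """Build one INSERT statement for one source record, sketching the
    one-record-at-a-time approach; blank values become nulls."""
    fields = record.rstrip("\r\n").split(SEPARATOR)
    columns, values = [], []
    for source_column, (target, data_type) in column_map.items():
        raw = fields[source_column - 1].strip()
        columns.append(target)
        if raw == "":
            values.append("NULL")                          # blank -> null
        elif data_type == "numeric":
            values.append(raw)
        else:
            values.append("'" + raw.replace("'", "''") + "'")
    return (f"INSERT INTO {table} ({', '.join(columns)}) "
            f"VALUES ({', '.join(values)})")

with open("daily_feed.txt") as feed:     # hypothetical source file
    for line_number, line in enumerate(feed):
        if line_number < LINES_TO_SKIP or not line.strip():
            continue
        print(build_insert(line))        # a real run would execute the insert
```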
Job Manager And Job Server When the job is run with the test utility, its characteristics are changed: any error will halt the test so the error can be displayed for immediate attention.

Check For Stop As discussed elsewhere, the job server can be stopped by pressing the stop button. There are two drawbacks to this ability. First, it causes a slight slowing of the process because the server checks for a shutdown order after each insert. Second, if a shutdown is given in the middle of a million record insert, the remaining records will not be inserted. Setting the "check for stop" to no will tell the server to check for a shutdown order only after he is finished with the current job. If it is yes, he will check after every record. In either case, he will check before and after he begins reading records.

Stop On Record Error If a source record is erroneous and cannot be inserted, the system will log the error and continue processing the file. If the "stop on error" parameter is changed to yes, then a record error will cause the job to terminate when a bad record is encountered.

Log Errors The "log errors" parameter refers to individual record errors and defaults to no. If it is yes and the insert is ten million bad records, the log may fill. If it is no, then all bad records will be lost. If either the disable or the abort is set to yes, then it might be a good idea to log every error so the reason for the errors can be ascertained. Production jobs which routinely receive good data should be allowed to log bad records.

Delete File After an insert completes, the remaining source file can become a problem. If a new file does not over-write it before the next cycle, it will again be inserted into the database. By default, this parameter ensures that the file will be deleted after the insert job completes or abends. However, the operator may want to run a subsequent job to archive the file or to move it to another process. Setting this parameter to no will retain the file for subsequent processing. If there is an error in the delete, which may leave the file intact, should the job be disabled? The operator must analyze his jobs for event combinations such as this.
_________________________ Internal utilities are CoreReader's internal housekeeping jobs. One of the utilities must be selected from the list on the external panel. Utilities may safely be run in unattended mode with other jobs. The server will ensure that they are isolated from other jobs when they run. CAUTION: The utilities put an intense load on the database during execution. They should not be run simultaneously with other operations. For example, they should not be run if another job server is running concurrently, or if the CoreReader Gateway Server is running, or if people are working in the GUI. The AxleBase documentation recommends that the host app, which is CoreReader in this case, should lock the database before running purges and backups. CoreReader does not lock the database because that would lock out people who might be trying to work in it. It is therefore the responsibility of the administrator to ensure that a server does not run a utility when another process or person is working. It is possible to corrupt the database.
_________________________ The external program type, or kicker, will run any kind of external executable. That can be a file with an exe extension, a batch file, or anything else that the operating system can execute. If the external program type is selected, the name must be entered on the external panel. The name must be complete, including the suffix. The complete path to the file must also be entered. The parameters box allows parameters to be passed to the external job. Any string of characters except apostrophes may be passed to the external job; leading and trailing spaces will be trimmed off. If parameters are passed, the file's suffix will be dropped out of the way during execution. The maximum numbers of characters are:
WARNING! Running external programs always presents problems. The fact that the program was built by a big brand name or that it cost a lot of money does not insure its reliability. Having your unattended server run an external program can crash your server or your computer and can corrupt other jobs in the queue. When an external program is run, CoreReader attempts to regain focus so that he can continue to service his job queue. If the external program is able to trap the focus, your job server will stop until (and if) the other program finishes and returns it. The external job must be able to unload itself and clean up after itself when it completes its task. CoreReader attempts to disassociate himself from external jobs to protect himself. If he were to supervise the job's execution, its bugs could more easily corrupt his operation. When he kicks off a job, he tries to immediately drop it from his grasp, which precludes his reports on the progress or execution of external jobs. CoreReader can sometimes note a job failure if it happens immediately upon execution. Which brings up stored procedures. Stored procedures are jobs that are saved in databases. CoreReader can be used to run them. Do not do that. Database managers are notoriously buggy and their bugs almost always appear to be the fault of the system that is using them. This is not hearsay, but is based upon many years of painful personal experience.
_________________________ This job type processes a text file one record at a time. It will pass each record to an external process, to a query, or to both. Each record is
External process: values are passed as a single bar-separated string. To use them in a query, enter the string _record_ or _column1_, _column2_, etc.
_________________________
Job Manager And Job Server (Also, see the Job Queue section.) Each job must have a schedule so the server will know when to run it. The schedule is entered on the job screen and is saved as part of the job. Setting up a schedule may seem simple until all the various combinations are encountered and examined. The job manager provides some assistance while the job is being created, but the task can still be confusing unless the subject is first understood.

Do not schedule a job to repeat within several seconds. As far-fetched as that may sound, somebody may envision a need. However, runs will be missed on such a tight schedule because CoreReader brackets each schedule within several seconds to avoid duplicate executions. Contrary to the silly illusion that has been created by this digital age, it is impossible to do anything exactly on time. The best that we and our systems can do is to set an error span for a timed event. CoreReader's error span is set at two whole seconds, as opposed to fractional seconds. If the job event is the arrival of a time, then CoreReader will run the event within a time span of that time plus zero to two seconds.

So what happens when a job's schedule is missed? Suppose, for example, that the operator has done everything possible to schedule his jobs with intelligence, but the database manager hangs for five minutes on a job, which causes the schedule to be missed for the next job. No problem. The job server will notice the situation and compensate by running the next job anyway. And what happens if something goes haywire and the system is hung past all job events? For example, database servers have been observed locking into CoreReader for hours when they had problems. Again, no problem. CoreReader's job server will notice the problem. As soon as he can resume normal activities, he will begin cycling the jobs as specified. In that case, some jobs may be run immediately since they will be overdue, but activity will otherwise be normal.

However, several possible problems remain. If several events are skipped for a job, then obviously, only the last one will be run. If a time event is missed outside of a cycle, then it will be skipped. Such a thing might happen, for example, if something were to hang the operating system just as a cycle ended. There is no way to check for such a miss if it happens, but it should be extremely rare if it happens at all.

The "abort on miss" parameter provides a way of telling CoreReader to abort a job if its scheduled run time is missed. If that job does not run at its specified time, it will be skipped until its next scheduled run. The specified time is the scheduled time plus and minus the buffer time. Note that turning this on for a job in a large job queue could cause the job to be habitually skipped. (An event miss is not an error and this should not be confused with the "disable on error" parameter.)

Finally, with all of those controls, if the schedule density exceeds the limits of the infrastructure, job runs will be missed. It is possible for the density to be so great that some jobs can never run. When a server is being tested for production, a daily inspection of the log is recommended to ensure that that situation has not arisen. Beginners like to schedule a job to cycle every few seconds. Unless it is necessary, that is a waste of resources, and if there is a large queue, it can put an inordinate load on the infrastructure and cause missed events. Thought should be given to job scheduling.
Job Manager And Job Server A job is run by the server when a cycle event occurs. An event type must be selected for each job. Two selections are available: time and interval. An important difference between the operation of the two types is sometimes overlooked, so it is made explicit here. The interval type will run on load and thereafter, but the time type will not run on load because specific times are specified for the job to run.
Job Manager And Job Server If the event type selected is interval, then the interval selection box will be displayed. It will list second, minute, hour, and day. One of those must be selected. A quantity box will also be displayed where the quantity must be entered. The quantity is the number of interval events that will be allowed to pass before the job is again run. The job will be run on load and then will be run each time that the specified number of events passes. For example, if the event is hour, and the quantity is thirty, then the job will run every thirty hours.
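A minimal sketch of how an interval schedule of this kind could be computed, assuming a hypothetical job record with a unit and a quantity (these field and function names are illustrative, not CoreReader's internals):

    import datetime

    # Seconds per interval unit; the unit names mirror the selection box described above.
    UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}

    def next_interval_run(last_run: datetime.datetime, unit: str, quantity: int) -> datetime.datetime:
        """Hypothetical: next run time for an interval-type job.
        Example: unit='hour', quantity=30 -> thirty hours after the last run."""
        return last_run + datetime.timedelta(seconds=UNIT_SECONDS[unit] * quantity)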
Job Manager And Job Server If the event type is time, then the time box will be displayed. Enter a time and the job will run at that time every day. ( U.S. military time is always the recommended format because it reduces confusion. I require it on the job site. ) Multiple times may be entered in the time box if needed. If multiple times are entered, they must be separated, not delimited, by the vertical bar, sometimes called a pipe by old Unix programmers. For example:
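An entry such as 0800|1430|2200 would request three runs per day (the specific times are illustrative, not taken from the CoreReader documentation). A minimal parsing sketch, under the assumption that times are entered as four-digit military strings:

    import datetime

    def parse_time_list(entry: str) -> list:
        """Hypothetical: split a pipe-separated list of military times, e.g. '0800|1430|2200'."""
        times = []
        for token in entry.split("|"):
            token = token.strip()
            times.append(datetime.time(hour=int(token[:2]), minute=int(token[2:])))
        return times

Note that the bar goes between the values rather than after each one, which is presumably what the manual means by "separated, not delimited".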
Job Manager And Job Server A date may be specified for the job by entering a valid date in the date box. If a date is entered, the job will run only on the specified date and never again. Multiple dates may be entered by using the vertical bar to separate them. Up to 200 characters may be entered. To run a job every day, do not specify a day or a date.
Job Manager And Job Server A day is any day of the week or any day of the month. A day may be specified by entering a day of the week or a day of the month in the day box. If one is specified, the job will run every time that that day arrives. There are examples in the drop-down box on the screen. Week days and month days may be mixed, in any sequence, in the day box; separate them with vertical bars (an illustrative sketch follows this paragraph). If a day or date is specified, or multiples thereof, then the job will execute at the specified time or times, or at the specified intervals, only on the specified day or date. To run a job every day, do not specify a day or a date.
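A minimal sketch of matching a mixed, pipe-separated day entry against the current date, assuming weekday names and month-day numbers are the two accepted forms (the entry format and helper names here are assumptions, not CoreReader internals):

    import datetime

    WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
                "friday", "saturday", "sunday"]

    def day_matches(entry: str, today: datetime.date) -> bool:
        """Hypothetical: True if today matches any pipe-separated day token,
        e.g. 'friday|15' matches every Friday and the 15th of every month."""
        for token in entry.split("|"):
            token = token.strip().lower()
            if token.isdigit() and int(token) == today.day:
                return True
            if token in WEEKDAYS and WEEKDAYS.index(token) == today.weekday():
                return True
        return False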
Job Manager And Job Server Jobs do not need to be entered and saved in any particular sequence. CoreReader checks the schedules when he builds the job queue and makes all of the required decisions. When the server starts, he constructs the job queue and writes it into the log, so it is immediately available for inspection. ( The following re-sequencing feature is currently under construction.) The job sequence in the queue can be specified, if desired, by setting the sequence number to any whole positive number from 1 to 10,000. The use of this feature is not recommended. Job scheduling appears to be simple, and that appearance can lead to many errors. It is usually better to just schedule jobs as needed and then allow CoreReader to construct the appropriate queue. Sequence numbers may not be duplicated. If a job's requested queue position duplicates that of another job, the job will be disabled for the duration of the run. The number of sequenced jobs is limited to 10,000; all others must run after them. When jobs are manually sequenced, the sequenced jobs are queued in front of the unsequenced jobs. Note that manual sequencing does not alter the job schedules. To correctly sequence jobs, then, their schedules and abort-on-miss parameters should probably be evaluated. (If you are a computer science graduate, stop worrying about whether it's actually a queue or a stack, and just do it like normal people.)
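A rough sketch of the ordering rule described above, in which manually sequenced jobs are queued ahead of unsequenced jobs and a duplicated sequence number disables the later job for that run. The job structure is hypothetical; the real queue construction is internal to CoreReader.

    def build_queue(jobs):
        """Hypothetical: 'jobs' is a list of dicts with an optional 'sequence' (1..10000).
        Sequenced jobs go first, in sequence order; unsequenced jobs follow in the
        order they were read; a duplicated sequence number disables the later job."""
        seen = set()
        sequenced, unsequenced = [], []
        for job in jobs:
            seq = job.get("sequence")
            if seq is None:
                unsequenced.append(job)
            elif seq in seen:
                job["disabled"] = True   # duplicate position: disabled for this run
            else:
                seen.add(seq)
                sequenced.append(job)
        sequenced.sort(key=lambda j: j["sequence"])
        return sequenced + unsequenced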
Job Manager And Job Server Scheduling is so complex that a note on testing may be appropriate at this point. The server has been run twenty-four seven with a test set of over a thousand jobs which were executing nearly continually. They included a mixed set of different kinds of jobs with various schedules, connections to various server brands, various outputs, and various queries. HOWEVER, it is probable that a fatal combination has been overlooked, and since this is an unattended server, jobs should always be tested before moving them into production.
_________________________
Job Manager And Job Server A test will run only if the job is loaded into the Job Manager screen. A job which is under construction can be tested without saving it. An existing job can be tested by first loading it into the job manager. A test will run only once and then stop. It will not cycle, regardless of its schedule. In the job manager screen, select the "run or test" panel. The test button will be seen on that panel. Press it to test the job. The job will be tested by the tester and not by the server, so it will run only once. Warning: Do not test if a CoreReader Gateway Server is running on the computer. The job's parameters will be checked by the system before running it. You will be notified of any problems found before the job runs. A test should not be interrupted after it begins the run. An interruption will probably degrade or destabilize the operating system until it is rebooted. A successful job test will deliver the specified output to disk.
Job Manager And Job Server Caution ! Running the Job Server is easy to do, and if the parameters are understood, creating and running jobs is simple. That simplicity hides the power of the tool. Inappropriate use of this tool can destabilize the local computer, the network, and remote systems.

Running the Job Server will initialize CoreReader. Connections will be closed, queries will unload, other screens will close, and other actions will be taken to avoid confusion and data corruption. To run the Job Server, select the "run or test" panel. On that panel, there are two run buttons. Be sure to push the one that runs only your own jobs. The server will begin cycling all of your jobs that are enabled. After the button is pressed, the server will begin running jobs. It will continue running in the background around the clock until it is stopped.

Job validation is minimal in the server; validation was done in the Job Manager screen when the job was saved. It is therefore important that the database integrity be protected. Manual changes to the database are intentionally made possible, but should be avoided.

The server loads all jobs into RAM when he starts. That makes it safe for the department to continue working on its jobs. But remember that a job contains only the names of connections and queries, and they must be looked up when the job runs. If an error is created in one of them, the next job that uses it will return an error, and the server will disable that job. The server does not load new jobs while running. To add a new job, stop the server; construct, test, and save the job; and restart the server. He will construct a new job queue which will include the new job.

When the server starts, jobs that have an event type of interval will be run immediately and then will begin the specified cycling. If a job has a time, then it is assumed that the job should run only at the specified time, so it will not be run immediately and will run when the specified time arrives. For jobs with a day or a date specified, the run will not take place until that day. The server will wait until that day arrives and then will check the event type for the job.

Using the local GUI while the job server is running is not recommended. The operator and the server will conflict with each other and will contend for resources. When possible, it is recommended that the job server be run on a dedicated machine in a multi-user installation.

Warning: Read the Stopping The Job Server section before running the server.

Warning: Do not run the Job Server on the same computer on which a CoreReader Gateway Server is running. They will interfere with each other if run simultaneously on the same computer.
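A simplified sketch of the start-up behavior described above: interval jobs run immediately on load, time jobs wait for their specified time, and jobs with a day or date wait for that day. The field names are illustrative; this is not CoreReader code.

    def should_run_on_load(job) -> bool:
        """Hypothetical: decide whether a job runs immediately when the server starts."""
        if job.get("day") or job.get("date"):
            return False          # wait for the specified day or date
        if job.get("event_type") == "interval":
            return True           # interval jobs run on load, then begin cycling
        return False              # time jobs wait for the specified time of day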
Job Manager And Job Server Stopping most servers is straightforward. You tell them to stop and they then handle their own affairs. Building a server is usually a pleasant and interesting task for a professional developer. But this server has been tied to a GUI (graphical user interface), and when a GUI is tied to a server, everything changes. The event structure within a GUI, between it and the server, and between the two and the operating system reaches a level of complexity that can exceed the ability of software control. Normally, the operator is not bothered with internal CoreReader matters, but this is pointed out to let you know that stopping the job server is problematic and bears watching. Warning: when stopping the job server, be deliberate, be patient, and watch for anomalous events.
The safe and correct way to stop the job server is to press the stop button. When the button is pressed, CoreReader will tell you that he is trying to stop the server and will ask you to wait. When the job stops, other CoreReader functions may be resumed. If he is running a job when the button is pressed, he will work to a stopping point before stopping. The job will not necessarily be completed, so the results should be checked if the output was needed. A log check is recommended. Note that a job which takes hours to run may take hours to stop. The delay will depend upon the structure and characteristics of that job. If the server does not stop within sixty seconds, or within the time required for a job, an unmanageable problem may have occurred. It may then be necessary to restart the computer. If the server is stopped abnormally, or if an error occurs in a stop, it may be necessary to unload CoreReader and restart the computer to clear the problem. When CoreReader is restarted after the problem, files should be loaded and checked for integrity. Warning: The stop button is designed to protect your data, your database, and the stability of your computer. If any method other than the stop button is used to kill the server, including those provided by the operating system, it may degrade or destabilize the operating system and may require a computer restart.
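The "work to a stopping point" behavior is typical of a cooperative stop flag. A minimal sketch of that pattern, purely for illustration (CoreReader's actual stop mechanism is internal and may differ):

    import threading

    stop_requested = threading.Event()   # set when the stop button is pressed

    def run_job(steps):
        """Hypothetical: check the stop flag between steps so the job halts at a safe point."""
        for step in steps:
            if stop_requested.is_set():
                break                     # stop at a clean boundary; the job may be incomplete
            step()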
Job Manager And Job Server Before reading this section, be sure to read the sub-sections Starting The Job Server and Stopping The Job Server. The button that runs the departmental server is located on the "test or run" panel. The departmental server will run only shared jobs. It will run all shared jobs that it finds on file. Note that this qualification includes the operator's own jobs: if an operator's job is not shared, the departmental server will not run it. A running job server can be a tremendous drain on a workstation. However, even a slow workstation can service many small jobs. Therefore, it is common to dedicate an unused computer to a job server. The server can then be allowed to run day and night to service the department's job requirements. A job may require hours to run. When such a job is created in a department, it will interfere with all other jobs. A more powerful computer may be the solution. If there is not enough computing power available, the job should be removed from the departmental runs and run on its own dedicated computer. If a new job is saved in the department, the server must be stopped and restarted to load it. When a departmental server is used, output targets should be carefully checked. The departmental server must be able to see the target location for every job. If a location is used for output by multiple jobs, the administrator should ensure a naming convention for the department so that jobs will not overwrite each other.
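One simple way to keep departmental jobs from overwriting each other's output is to build the job name and a timestamp into each output file name. This is only an illustrative convention, not a CoreReader feature:

    import datetime

    def output_name(job_name: str, extension: str = "csv") -> str:
        """Hypothetical naming convention: jobname_YYYYMMDD_HHMMSS.csv keeps a shared
        output location collision-free across a departmental job queue."""
        stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        return f"{job_name}_{stamp}.{extension}"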
Job Manager And Job Server What follows are guidelines, although some of them are rather emphatic. Concurrent execution of any kind of server requires thought, testing, and planning by the administrator. Although not always explicit, the following observations on concurrency distinguish between processes, process types, computers, and databases. Multiple Job Server instances should never be run on the same computer. The Job Server should never be run on a computer that is running a Gateway Server. When the Job Server starts, he attempts to disable the GUI so that an operator cannot corrupt the operation. It may be possible to circumvent that safety measure, so do not allow a CoreReader GUI to be operated on a computer on which the Job Server is running. The internal database manager controls concurrency, so the Job Server can usually use the same database that is being used by other processes. The exception might be a Gateway Server, especially if the activity is intense. The manner in which the Job Server loads protects his current jobs, so people in the department can continue to work, even on their own jobs. However, he does not load connections and queries when he starts, so care should be exercised when working on connections and queries that have been put into production.
Job Manager And Job Server The system configuration has a parameter which can load the job server automatically when CoreReader is loaded. This can be useful when the system must be restarted by non-technical or unskilled personnel. A drawback to using this feature is the GUI interaction. As discussed elsewhere, the CoreReader job server is intentionally designed with a GUI which presents many potential problems and points of failure for a server. This feature should not be used until the local server setup and all of its jobs are debugged and declared to be in production status. Turning on this feature can produce an environmental interaction with some job configurations, so it should be adequately tested.
_________________________ After the server has finished loading and checking the jobs, and before he begins cycling, he builds a job queue and immediately writes the queue into the log. The log makes the job stream immediately available for inspection. It is important to remember that the queue is loaded and fixed when the server starts; he will load a new job only if he is restarted. The queue sequence may be the order in which the jobs were read, but this is not necessarily so, because the server may apply logic to job characteristics that alters the sequence. The queue is the order in which the server cycles the jobs. As he reads the queue, he inspects each job schedule in sequence to see if the job should be run. If not, he goes to the next job. Therefore, if two jobs are scheduled identically and are in the queue, the first one in the queue will probably run first. "Probably", because there are many factors that must be analyzed each time to understand the server's actions. At times, it can be very hard to understand an operating server. When the server is handling many jobs, the queue can help the human understand why the system is behaving as it does. ( There is a manual queue over-ride in the system for each job, but it is not yet functional. Maybe in the future, or if somebody needs it.)
_________________________ The Job Server is designed to run day and night as an unattended system, so logging should be turned on and verbose logging should be enabled when the server is run. Log entries for these operations can be confusing if one is not prepared, so this is only a note of forewarning. It should be remembered that several systems write to the log when the Job Server is running, and their entries will be mixed: the GUI side, the Job Server itself, and, where present, a Gateway Server may all write to the same log. All entries will concern the Job Server or its supporting operations. Logging is voluminous, but has been tested on a thousand-job queue running around the clock. The log size should be adjusted to allow the system administrator to check the log from time to time before it truncates.
_________________________ Because the job server needs to run day and night as an unattended system, the operation of its error handler must be different from that of the GUI. If an error message appears on the screen, the server will be stopped (hung) until a person manually clears it. Therefore, errors encountered by the job server are logged but normally do not display messages. Logging must be turned on, and verbose logging should be enabled, when the server is run. It is the responsibility of the operator to check the log periodically. It may be a good idea to also turn on the statistician.

Job validation is minimal in the server when jobs are loaded; validation was done when the job was saved. When an error is encountered within a job, the server disables the job to protect the rest of the job queue. He will not attempt to run that job again until he is reloaded.

Having developed systems within one of the world's largest and most professionally administered systems, I can declare that the error rate in any infrastructure is far greater than anybody suspects. Most errors are not even noticed by administrators because they are fleeting and ephemeral, but they are able to disrupt cycling systems. Because they are so highly respected and are surrounded by highly paid administrators, the big-name database managers are usually the last to be looked at when a bizarre error hits. However, the experienced developer looks toward them first.

CoreReader tries to compensate for external errors by insulating himself from them. There are a few, however, such as the crash of a database server, that can bring him down. Also, if he is geographically dispersed for a multi-user installation, he cannot compensate for the failure of the involved extended components. His log will sometimes contain evidence of what caused an unhandled failure. CoreReader will try to work through connection errors and query errors. The server will log those errors and attempt to continue running to service the job queue. Again, the operator should check the log periodically.

Disk i/o load and the load on the CoreReader database should be monitored if multiple job servers are running in a multi-user environment. Normally, the impact will not be noticed, but the operator should watch the system as more jobs and servers are brought on line. Any problems will usually be seen first in the log as anomalous entries. If there is an error within the server himself, he will try to log the problem before he goes down.

( When working on a job site, I normally analyze all logs first thing every day for the previous twenty-four hours.)

( Developers will immediately see that the system could be controlled better if the server were moved into an external system that is controlled from the GUI. But such sophistication must be reserved for those who can pay for the immense development time that such systems require.)
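A rough sketch of the behavior described above, in which a job that raises an error is logged and disabled for the rest of the run so the remainder of the queue keeps cycling. The job structure and run hook are hypothetical, not CoreReader code.

    import logging

    def cycle(queue):
        """Hypothetical server cycle: log job errors, disable the failed job, keep going."""
        for job in queue:
            if job.get("disabled"):
                continue
            try:
                job["run"]()                 # connection and query are looked up at run time
            except Exception as exc:         # e.g. a connection or query error
                logging.error("job %s failed: %s", job["name"], exc)
                job["disabled"] = True       # not attempted again until the server reloads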
__________________________________________________ Chapter Database Mover __________________________________________________
_________________________ (Under construction.) This one exists only because I need it, but I've shared it just in case you need it. I have many server brands running, with countless databases in all of them. I grew tired of manually moving databases every time that a new database manager was brought on line. The database mover does not actually move a database; it copies a database. For security, it is not allowed to delete the old database. A connection to the source and a connection to the target are required. This was done to ensure that both databases have been created; I did not want the hassle of messing around with security on umpteen million brands of servers. The target database must exist and preferably be empty. Existing objects will be overwritten by inbound objects. All security must be prepared for the move.