CoreModel Testing
__________________________________________________
CoreModel Copyright 1996 - 2013 John E. Ragan.
__________________________________________________
This test was not dated; when CoreModel development stopped, it was noted only as the "Latest Test". It is estimated to have been run in 2002 or 2003.
__________________________________________________
* Objective * To force CoreModel to fail through artificially high loading. It was hoped that failures under such loads would reveal structural design flaws within the CoreModel architecture.
* Execution * A simulated enterprise-level CoreModel installation was created. A centralized executable ran from a Linux server, the database server ran on an NT computer, and the various CoreModel servers ran around the clock on other operating systems.

The remote control server was started first. It produced relatively little load until the heavier loads began to saturate the database server: on each cycle it read a few tables, inserted a few log records, and went back to sleep.

Both of the Metadata Servers were next brought on line from separate computers. The web administrator robot created a tremendous load on the system, and the dictionary server was almost as heavy. In each cycle, the web server created hundreds of pages with thousands of records. After all servers were running, the two Metadata Servers frequently fought over distributed resources, which then drew the other servers into the fight. The 100-megabit network was frequently saturated with colliding CoreModel traffic.

CoreModel contains a Stupid User Simulator (SUS) whose purpose is to try to lock up everything by doing astoundingly stupid things at high load. It can be run in multiple instances as a high-speed, high-load unattended server. Two Stupid User Simulators were brought on line on separate computers. Each was configured to do a thousand iterations in each cycle, so the load of each one represented hundreds of (stupid) modelers. When a SUS server woke up, he created new model objects in the database until he had made a thousand of them, pausing one tenth of a second between models to increase the system load and to try to hit somebody else. When he finished, he demanded that they all be sent back to him, logged the return, and threw it away. Then he began deleting one model every ten milliseconds, which of course is not permitted in the real world, but is an attempt to confuse the Metadata Servers that are using the models. Throughout all of that, he logged each operation's success or failure. While the first SUS worked, the other SUS server woke up and began doing the same stupid things at high speed. (The cycle is sketched below.)

All of the servers, including the SUSes, were set to a one-minute cycle, which produced a continuous load on the infrastructure through their contention for resources. Because of their varied job patterns, their cycle interactions varied.

Logging was set to verbose for every operation in the entire installation, which added tremendously to the load, but its primary purpose was to record the success or failure of operations. If something had not operated correctly, an error would have shown up. Each operation received one log record regardless of the operation's size: creating a new record was logged as one operation, but the complex operation of reading two thousand records, translating them into HTML or XML, and writing all of them into files was also logged as only a single operation, although it involved thousands of operations within CoreModel. If any component of a logged operation had failed, the entire operation would have been logged as a failure. Therefore, a CoreModel stress test actually extends over many millions of CoreModel operations.

A human operator kept a CoreModel instance running daily during the test, and when the system monitors on the various computers indicated saturation, he sporadically told it to perform a random task such as validating a model, updating a record, or reading a few hundred thousand records.
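The CoreModel client interface is not shown on this page, so what follows is only a rough sketch of the SUS cycle described above, written in Python against a hypothetical client object. The create_model, read_models, and delete_model calls and the model names are invented for the illustration; the timings (a tenth of a second between creates, ten milliseconds between deletes, a thousand iterations per one-minute cycle) come from the description above.

    import logging
    import time

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def sus_cycle(client, iterations=1000):
        """One Stupid User Simulator cycle: create a batch of models,
        read them all back, then delete them at a pace intended to
        trip up the Metadata Servers that are still using them."""
        created = []

        # Create new model objects, pausing 0.1 s between each one to
        # stretch the load out and collide with the other servers.
        for i in range(iterations):
            try:
                model_id = client.create_model(name="sus_model_%d" % i)
                created.append(model_id)
                logging.info("create %s: success", model_id)
            except Exception as exc:
                logging.error("create %d: FAILED (%s)", i, exc)
            time.sleep(0.1)

        # Demand all of the new models back, log the round trip,
        # and throw the data away.
        try:
            models = client.read_models(created)
            logging.info("read-back of %d models: success", len(models))
        except Exception as exc:
            logging.error("read-back: FAILED (%s)", exc)

        # Delete one model every 10 ms -- not a realistic workload,
        # but an attempt to confuse the Metadata Servers.
        for model_id in created:
            try:
                client.delete_model(model_id)
                logging.info("delete %s: success", model_id)
            except Exception as exc:
                logging.error("delete %s: FAILED (%s)", model_id, exc)
            time.sleep(0.01)

    def run_sus(client, cycle_seconds=60):
        """Unattended server loop: wake up, run a cycle, then sleep out
        whatever remains of the nominal one-minute cycle."""
        while True:
            start = time.time()
            sus_cycle(client)
            time.sleep(max(0.0, cycle_seconds - (time.time() - start)))

Run in two instances on separate computers, each instance following this loop, the simulators collide with each other as well as with the Metadata Servers, which is the point of the exercise.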
* Fault Discussion * Infrastructure :
Survival :
If CoreModel failed to survive and resume operation after a computer crash or any other infrastructure failure, he was assigned a fault. His internal servers were required to resume operations automatically after a computer restart. Human intervention was not permitted except to restore failed hardware and operating systems.
Intentional errors :
* Duration * The test ran non-stop around the clock until CoreModel had logged over a million operations. Because a single log entry can cover an operation with thousands of internal steps, a million log entries represent many millions of CoreModel operations.
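To make that arithmetic concrete, here is a rough illustration of the one-record-per-operation logging rule described under Execution. The Python is invented for this page (CoreModel's own logging code is not shown here), and the operation and step names are placeholders.

    import logging

    def run_logged_operation(name, steps):
        # One log record per logical operation, however many internal
        # steps (a list of callables here) it contains; if any step
        # fails, the whole operation is logged as a failure.
        try:
            for step in steps:
                step()  # e.g. read a record, translate it, write a file
            logging.info("%s: success (%d internal steps)", name, len(steps))
            return True
        except Exception as exc:
            logging.error("%s: FAILED (%s)", name, exc)
            return False

Under this rule, inserting a single record and building report pages from two thousand records each produce exactly one log entry, which is why a million log entries stand for many millions of internal CoreModel operations.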
* Results * Results :
Conclusions :
__________________________________________________
Computers :
Databases :
Network :
Operating Systems :
__________________________________________________
Models and configuration storage for CoreModel. The stress of enterprise-level CoreModel operations overwhelmed the MS Access desktop database manager several times, so the CoreModel database was moved into Oracle and other database servers for stress testing. AxleBase began service in CoreModel after formal testing ended.
__________________________________________________
The CoreModel test database contains forty-one models that are based on past modeling projects. They have fourteen hundred tables, fifteen thousand columns, and thousands of supporting elements for those objects.