Module:Timing/doc
{{Module rating|beta}}

The purpose of this module is to provide a simple method to do timing analysis for [[w:en:performance tuning|performance tuning]] of Lua functions, that is, [[w:en:Profiling (computer programming)|profiling of functions]]. Measuring the duration of function calls is usually part of a larger effort to identify bottlenecks and problematic code. This is not a full-blown profiler, as it is not possible to do line tracing (call-graph profiling) in the current setup. Its only purpose is to measure execution time (flat profiling), and to do this interactively from the debug console (i.e. on a single function).

''As is said several times in this text, the timing is not accurate and it should only be used as a coarse measure of the approximate run time and load of a function. Do not attempt to fine-tune a module to just fit into the maximum allowed time slot for a module; keep well below the limit!''

The timing analysis is called with an executable function, and optionally a count (the size of each test set, default 100) and a number of such test sets (default 10). The total number of calls will be ''count × sets'', and this gives the mean runtime for the function. The standard deviation is calculated from the number of ''sets'' and will only be a coarse estimate. If run with only a single set, no standard deviation is reported, but the execution time will still vary even if that variation is not measured.

The profiler does two passes over the total calls: one with a dummy function and one with the actual function. This gives a baseline for how heavy the surrounding code is, which can later be subtracted from the measurement of the function under test. If the function under test is too simple, those two execution times will be close and the uncertainty in the difference can be high. If the execution time is of the same order as the standard deviation, the analysis should be rerun with larger or more sets.

During development it became clear that 100 calls in 10 sets is usually enough to get good estimates, but do not put too much weight on those numbers. If one function is twice, three times, or even ten times slower, never mind, as long as it runs in constant or linear time. If something is 100 times slower, or shows indications that it runs in exponential time, then you should probably consider other algorithms.
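
A minimal sketch of how a measurement might be invoked from the debug console is shown below. The exact entry point of the module is not spelled out in this text, so the call form, the example function <code>underTest</code>, and the argument order (function, count, sets) are assumptions for illustration only.

<syntaxhighlight lang="lua">
-- Hypothetical sketch only: the call form and the argument order
-- (function, count, sets) are assumed from the description above,
-- not taken from the module's actual interface.
local timing = require('Module:Timing')

-- A small function under test (illustrative example).
local function underTest()
	local t = {}
	for i = 1, 50 do
		t[#t + 1] = tostring(i)
	end
	return table.concat(t)
end

-- 100 calls per set and 10 sets give 100 × 10 = 1000 calls in total;
-- the result is a mean runtime with a coarse standard deviation.
timing(underTest, 100, 10)
</syntaxhighlight>

If the reported mean for the function under test is close to the baseline from the dummy pass, or close to the standard deviation, increase the count or the number of sets before drawing any conclusions.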