By DarthWho and Codeguy

Introduction

The first and most important thing to remember before optimizing for speed is to get the program working, and then get it working right, before performing even a single optimization. This does not mean you have to find and eliminate every single bug in the program; it means you must get rid of all the major glitches, the ones that people are likely to run across. Certainly the ideal is to eliminate all glitches; however, in anything much more complex than "Hello World" it would be inordinately time consuming to fix every last one. So when the program works right 99.9+% of the time, it is time to move on to optimization.

Focusing your Efforts:

It does not do you much good to optimize a function that runs only 1%-5% of the time. Most of your effort should go into optimizing the areas that consume the most processor time, and those tend to be the areas that are used most often. So focus your efforts on the most commonly used code.
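The way to find those areas is to measure rather than guess. As a sketch, here is how a profiler points at the hot spot; this uses Python's built-in profiler purely for illustration, and the two routines (`hot_loop`, `rarely_used`) are hypothetical stand-ins, not code from this article.

```python
import cProfile
import io
import pstats

def rarely_used():
    # hypothetical routine that accounts for a tiny share of the run
    return sum(i * i for i in range(100))

def hot_loop():
    # hypothetical routine that dominates the run time
    total = 0
    for i in range(200000):
        total += i % 7
    return total

def program():
    for _ in range(5):
        hot_loop()
    rarely_used()

profiler = cProfile.Profile()
profiler.enable()
program()
profiler.disable()

# print the five most expensive calls by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

The report lists `hot_loop` at the top, which tells you where optimization effort will actually pay off.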

To give an example: if you have a routine that uses 10% of the processing time and you cut its time to an eighth of what it was, you only cut total execution time by 8.75%. A program optimized like that which would normally run for 5 minutes would instead run for 4 minutes and 34 seconds. Not much of a boost, is it? However, if you have a routine that accounts for 50% of the processing time and make it run twice as fast, you save 25% of the total; that same 5-minute program would run to completion in 3 minutes and 45 seconds. A noticeable difference, isn't it? If you are going for all-out optimization and combine both optimizations (remember, they are separate routines), the program runs its course in just 3 minutes and 19 seconds. Note that the single largest boost came from the routine that was only doubled in speed, because it accounted for more of the processing time. So focus your efforts where the processing time is.
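The arithmetic above can be reproduced directly; a quick Python sketch of the rule that the overall saving equals (fraction of time in the routine) times (1 - 1/speedup):

```python
def time_after(total_seconds, fraction, speedup):
    """Runtime after a routine taking `fraction` of the total time
    is made `speedup` times faster."""
    return total_seconds * (1 - fraction * (1 - 1 / speedup))

total = 300.0  # the 5-minute program, in seconds

# 10% routine made 8x faster: saves only 8.75% -> 273.75 s (4 min 34 s)
a = time_after(total, 0.10, 8)

# 50% routine made 2x faster: saves 25% -> 225 s (3 min 45 s)
b = time_after(total, 0.50, 2)

# both optimizations together (separate routines): 198.75 s (3 min 19 s)
c = total * (1 - 0.10 * (1 - 1 / 8) - 0.50 * (1 - 1 / 2))
```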

Specific Areas of Speed Optimization:

Structural

Structural optimization may not necessarily result in an execution speed increase, but it is another means of keeping code organized and neat. It is primarily the effective use of structure statements and logical blocks. While this may not make a program faster, it will make maintenance and changes far easier for the programmer, whether it is you changing or adding features or modifying code to be used as part of another program. The time you take now to ensure good program structure will be time well spent; there is no point in optimizing code that is utterly unreadable. Covered in this section is the use of IF, DO/LOOP, WHILE/WEND, FOR/NEXT and SELECT CASE blocks; in other words, all the looping and logical blocks. First, here is a program that demonstrates why nested IF statements are far faster than what I call one-liners.

First percentage is for (variables a,b,c,d,e,f)=0

Second percentage is for (variables a,b,c,d,e,f)=1

Third percentage is for (variables a,b,c,d,e,f) set randomly

Take this code:

'_DEFINE A-Z AS INTEGER
'* one-liner if wins -- about 5% faster (all 0), 5% faster (all 1), 5% faster (random)
'_DEFINE A-Z AS SINGLE
'* nested if wins -- about 35% faster (all 0), 16% faster (all 1), 23% faster (random)
'_DEFINE A-Z AS LONG
'* nested if wins -- about 5% faster (all 0), about even (all 1), 4% faster (random)
'_DEFINE A-Z AS DOUBLE
'* nested if wins -- about 20% faster (all 0), 17% faster (all 1), 26% faster (random)
_DEFINE A-Z AS _FLOAT
'* nested if wins -- about 70% faster (all 0), 58% faster (all 1), 68% faster (random)

'* many people would believe this IF line will terminate after the first false,
'* but it does not, as this demo will show.
FOR pass = 0 TO 2
    SELECT CASE pass
        CASE 0
            a = 0: b = 0: c = 0: d = 0: e = 0: f = 0
        CASE 1
            a = 1: b = 1: c = 1: d = 1: e = 1: f = 1
        CASE 2
            a = INT(RND * 2)
            b = INT(RND * 2)
            c = INT(RND * 2)
            d = INT(RND * 2)
            e = INT(RND * 2)
            f = INT(RND * 2)
    END SELECT

    x& = 0
    FOR Trials = 0 TO 3
        x& = 0
        NTimesOneLine& = 0
        TStart! = TIMER(.001)
        DO
            IF a AND b AND c AND d AND e AND f THEN x& = x& + 1
            NTimesOneLine& = NTimesOneLine& + 1
        LOOP UNTIL ABS(TIMER(.001) - TStart!) >= 1

        '* now we add the structured version, which, while it takes more
        '* lines of code, returns an iteration count at least 20 percent
        '* greater than the previous example.
        x& = 0
        NTimesNested& = 0
        NestStart! = TIMER(.001)
        DO
            IF a THEN
                IF b THEN
                    IF c THEN
                        IF d THEN
                            IF e THEN
                                IF f THEN
                                    x& = x& + 1
                                END IF
                            END IF
                        END IF
                    END IF
                END IF
            END IF
            NTimesNested& = NTimesNested& + 1
        LOOP UNTIL ABS(TIMER(.001) - NestStart!) >= 1

        PRINT "Number of iterations using one-line"; NTimesOneLine&
        PRINT "Number of iterations using nested if"; NTimesNested&
        PRINT NTimesOneLine& - NTimesNested&, NTimesNested& / NTimesOneLine&
    NEXT
NEXT

So, in conclusion, the only time the one-line IF consistently wins is when working with integers. Keep this in mind when using IF statements. I still prefer the readability of nested IF statements. Using this structure will make your programs longer in line count but, overall, shorter in execution time wherever non-integers are part of the program, and very few programs in real-world situations deal entirely with integers.
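The underlying effect can be sketched outside BASIC too. QB64's AND is a bitwise operator, so the one-line IF evaluates every operand, while nested IFs stop at the first false condition. This Python illustration (the counting helper `check` is hypothetical) contrasts the non-short-circuiting `&` with nested `if`s:

```python
evaluations = 0

def check(value):
    # wrap each operand in a call so we can count evaluations
    global evaluations
    evaluations += 1
    return value

a, b, c = 0, 1, 1  # the first operand is already false

# one-liner style: & does not short-circuit, like QB64's AND
evaluations = 0
result_flat = check(a) & check(b) & check(c)
flat_count = evaluations      # all three operands evaluated

# nested-IF style: bail out at the first false condition
evaluations = 0
result_nested = 0
if check(a):
    if check(b):
        if check(c):
            result_nested = 1
nested_count = evaluations    # only the first operand evaluated
```

With the first operand false, the flat form does three evaluations while the nested form does one, which is exactly why the nested version iterates more times per second in the demo above.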

Mathematical

This is perhaps the most esoteric area of optimization. For a lot of people it does not make sense that, say, (I MOD 2)*(J MOD 2) has a faster cousin which is completely equivalent for non-negative integers: (I*J) AND 1. In several instances there is no way you can beat a mathematical optimization speed-wise. Perhaps the single biggest instance is summation:
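The equivalence is easy to verify mechanically; here is a quick check in Python for illustration (the identity holds because both sides are 1 exactly when both factors are odd):

```python
# (I MOD 2) * (J MOD 2) is 1 only when both I and J are odd;
# (I * J) AND 1 is 1 only when the product is odd, i.e. both factors odd.
parity_ok = True
for i in range(50):
    for j in range(50):
        if (i % 2) * (j % 2) != (i * j) & 1:
            parity_ok = False
```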

Before:

DIM SUM AS SINGLE
L = 0
H = 100000
FOR I = L TO H
    SUM = SUM + I
NEXT I
PRINT SUM

After:

DIM SUM AS SINGLE
L = 0
H = 100000
SUM = (H - L + 1) * (H + L) / 2

If you test those two side by side, it is clear that the summation equation is significantly faster; it takes the same few operations no matter how large the range is.
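The same comparison expressed in Python for illustration: the loop performs H - L + 1 additions, while the closed-form Gauss summation is a handful of operations, and both give the same answer.

```python
L, H = 0, 100000

# the loop version: one addition per value in the range
loop_sum = 0
for i in range(L, H + 1):
    loop_sum += i

# the closed-form version: constant time regardless of the range
closed_form = (H - L + 1) * (H + L) // 2
```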

Another place where you can speed things up is randomly filling a set percentage of an array, say at most 30% of it, for something like a Raycaster maze. Most people would write a structure something like this:

FOR i = 1 TO 1000
    FOR k = 1 TO 1000
        IF RND <= .3 THEN
            array%(i, k) = RND * 7
        END IF
    NEXT k
NEXT i

Instead, you can approach it statistically:

FOR i% = 1 TO 300000
    array%(1000 * RND, 1000 * RND) = RND * 7
NEXT i%

Statistically, that is approximately equivalent to the other routine, with two caveats: some of the 300,000 random placements will land on cells that were already set, so slightly fewer than 30% of the cells end up filled, and 1000 * RND can round to index 0, so the array must be dimensioned from 0.
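A small simulation makes both the resemblance and the caveat visible. This Python sketch uses a 100x100 grid instead of 1000x1000 (all names hypothetical): the per-cell version fills about 30% of the cells, while the dart-throwing version fills somewhat fewer because some darts hit the same cell twice.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable
n = 100

# per-cell coin flip: each cell filled with probability 0.3
grid_a = [[7 * random.random() if random.random() <= 0.3 else 0
           for _ in range(n)] for _ in range(n)]

# dart throwing: 0.3 * n * n random placements, collisions allowed
grid_b = [[0.0] * n for _ in range(n)]
for _ in range(int(0.3 * n * n)):
    grid_b[random.randrange(n)][random.randrange(n)] = 7 * random.random()

frac_a = sum(1 for row in grid_a for v in row if v > 0) / (n * n)
frac_b = sum(1 for row in grid_b for v in row if v > 0) / (n * n)
# frac_a is near 0.30; frac_b is a few percentage points lower
```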

Another area of mathematical optimization is Boolean optimization. Why use the statement NOT((NOT A) OR (NOT B)) when A AND B is completely equivalent? Or, conversely, NOT((NOT A) AND (NOT B)) when A OR B is exactly the same thing?

A small table of equivalents:

NOT((NOT A) OR (NOT B)) = A AND B
NOT((NOT A) AND (NOT B)) = A OR B
NOT((NOT A) AND B) = A OR (NOT B)
(NOT A) XOR (NOT B) = A XOR B

Others exist; try to work them out through either truth tables or Boolean algebra. Either will work.
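These identities (De Morgan's laws and relatives) can also be confirmed mechanically. In QB64 true is -1 and false is 0, and on those values NOT, AND, OR and XOR behave exactly like bitwise operators, so a Python check with `~`, `&`, `|` and `^` covers all four rows of the truth table:

```python
demorgan_ok = True
for A in (0, -1):        # QB64 false and true
    for B in (0, -1):
        # NOT((NOT A) OR (NOT B)) = A AND B
        if ~(~A | ~B) != A & B:
            demorgan_ok = False
        # NOT((NOT A) AND (NOT B)) = A OR B
        if ~(~A & ~B) != A | B:
            demorgan_ok = False
        # NOT((NOT A) AND B) = A OR (NOT B)
        if ~(~A & B) != A | ~B:
            demorgan_ok = False
        # (NOT A) XOR (NOT B) = A XOR B
        if (~A) ^ (~B) != A ^ B:
            demorgan_ok = False
```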

Now that Boolean equivalents have been covered, we can cover Boolean hybrid statements. Normally one would implement a complex decision structure with either an IF...THEN ELSEIF or a SELECT CASE structure. One such system:

IF K = 3 THEN
    D = 15
ELSEIF K > 3 THEN
    D = 33.2 * H
ELSEIF K < 3 THEN
    D = D / 2
END IF

One could gain some speed by putting that in SELECT CASE form. But say you wanted it to handle more complex statements involving a differing number of variables for each determiner (essentially adaptive behavior), and to be able to trigger multiple computations at once? Then you need the speedy Boolean hybrid.

Try this with a suite of different values; it will behave exactly as the IF...THEN statement above:

D=(-D/2)*(K<3)-(33.2*H)*(K>3)-15*(K=3)
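The same trick can be sketched in Python for illustration. In QB64 a true comparison returns -1, which is why the BASIC line negates each term; Python comparisons return 1, so the terms are simply added. The helper names (`d_branchless`, `d_if`) are hypothetical:

```python
def d_branchless(d, k, h):
    # each comparison contributes 1 when true, 0 when false,
    # so exactly one term of the sum survives
    return (d / 2) * (k < 3) + (33.2 * h) * (k > 3) + 15 * (k == 3)

def d_if(d, k, h):
    # the IF...ELSEIF block from the article, for comparison
    if k == 3:
        return 15
    elif k > 3:
        return 33.2 * h
    else:
        return d / 2

# the two agree for every case: below, equal to, and above 3
results_match = all(
    d_branchless(10.0, k, 2.0) == d_if(10.0, k, 2.0)
    for k in (1, 2, 3, 4, 7)
)
```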

However, this method can be extremely flexible. As an example:

D=((K MOD Z)*((M*Z MOD 2)=(D*2))+(Z MOD M)*(K=D))*(G=-1)-((M-Z+1)*(M+Z)/2)*(G=0)

That is a fairly complicated expression which is more difficult to reproduce with IF...THEN ELSEIF, and more difficult still with SELECT CASE.

However, if you have such an expression nested inside several loops, you can gain even more speed by precalculating the comparisons that do not change inside them. If G is constant over the inner loops,

D=((K MOD Z)*(M*Z MOD 2=D*2)+(Z MOD M)*(K=D))*(G=-1)-((M-Z+1)*(M+Z)/2)*(G=0)

becomes

'* before the inner loops:
GG = (G = 0)
G1 = (G = -1)

'* inside the loops:
D = ((K MOD Z) * (M * Z MOD 2 = D * 2) + (Z MOD M) * (K = D)) * G1 - ((M - Z + 1) * (M + Z) / 2) * GG

An important thing to note is that any speed saving here is only discernible in long, complicated expressions, and you run the risk of introducing bugs.

Algorithmic

It may seem that algorithmic and mathematical optimization are highly similar, and in fact they are; algorithms in computer programming can be mathematical or otherwise. There are string-based algorithms, for example, and strings have their own speed optimizations. An algorithmic optimization is choosing the best way to achieve a task, and the best way to do that is to research which method best does the job.

Logical

Logical optimizations consist of finding the locations where artifacts from older versions of the code exist but no longer serve any purpose. After all, why keep calculating Ackermann's function if you are no longer using the result? Essentially, the idea is to find the parts of the code that are no longer needed or actually used. I would suggest backing up your code before getting rid of anything; then, in a working copy, remove the code you do not think you need and test the program.

Pre-calculation

Yes, the magic of pre-calculation: perhaps the only method that can improve the speed of any program that uses complex functions. The idea is to decide on an array range that gives sufficient detail for your function to be of use to the program, fill the array with the function's values in advance, and then look values up instead of recomputing them. When you index the array, you make a scaling adjustment. Say your original function's range is 0 to 10, including all real numbers in between, and you only need 0.1 precision: then you would declare FuncArray(0 TO 100), multiply each input by 10 before using it as an index, and the array gives you the value you want, presuming you have placed the values into the array first.
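A minimal sketch of the idea, in Python for illustration, assuming a function needed over 0 to 10 at 0.1 precision (the function choice, `math.sin`, and the names are hypothetical):

```python
import math

STEPS_PER_UNIT = 10  # 0.1 precision, as in the example above

# fill the table once: entries for 0.0, 0.1, ..., 10.0
func_table = [math.sin(i / STEPS_PER_UNIT)
              for i in range(10 * STEPS_PER_UNIT + 1)]

def fast_sin(x):
    """Table lookup for x in [0, 10]: scale by 10 and round to the
    nearest stored entry, instead of calling the function again."""
    return func_table[round(x * STEPS_PER_UNIT)]
```

At the stored points the lookup matches the direct call exactly; between them it is accurate to the chosen 0.1 resolution, which is the trade-off pre-calculation asks you to accept.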

Assembly

Assembly, or ASM, is a programming language which can be inserted inline in most other programming languages. ASM is the closest one can get to machine code without having to remember immense strings of numbers while coding. As such, ASM is considered by many the ultimate way to optimize. However, hand-written ASM has two major problems. The first is that it can be slower than the code written by the compiler, because people often make their ASM code much more complex than it really needs to be. The second is that it often makes the code nearly impossible to read.