Example code - memoization

Latest revision as of 11:05, 17 June 2024

Memoization is an optimization technique that remembers the results of expensive (CPU- or time-intensive) function calls.
It is an example of the Golden Rule: Do not repeat work that has already been done.
The technique is also known as "caching of results".
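The idea can be sketched in a few lines with a plain dictionary as the cache. This is a minimal illustration using the classic Fibonacci recursion; the names fib and cache are illustrative and not part of the example below.

```python
#!/usr/bin/env python3
# Minimal memoization sketch: remember results in a dict keyed by input
cache = {}

def fib(n):
    # Return the remembered result instead of recomputing it
    if n in cache:
        return cache[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    cache[n] = result   # new result, put in cache
    return result

print(fib(30))  # fast despite the exponential naive recursion
```

Without the cache, fib(30) would make over a million recursive calls; with it, each value from 0 to 30 is computed exactly once.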

The main idea of the code is that the Compute class has an expensive method, compute. The method is made artificially expensive by introducing a delay: given a number, compute delays execution by number * delay, meaning higher numbers give more delay. It then splits the number evenly and calls itself recursively on both halves. This continues until the number is 1.
The user can decide whether to use a cache and see how that affects the runtime of the compute method, which returns both the result and the number of seconds used.

#!/usr/bin/env python3
# Demo of the effectiveness of caching
import time

class Compute:
    # Class variable used for caching
    cache = dict()
    
    def __init__(self, computeDelay):
        # Instance variable to determine the difficulty of the computing (a time in sec)
        if isinstance(computeDelay, (int, float)):
            self.computeDelay = computeDelay
        else:
            raise ValueError("Number in seconds expected")
    
    def compute(self, crunch, useCache=False):
        if not isinstance(crunch, int):
            raise ValueError("Integer expected")
        if not isinstance(useCache, bool):
            raise ValueError("Boolean expected")
        start = time.time()
        if useCache:
            result = self._computeCache(crunch)
        else:
            result = self._compute(crunch)
        seconds = time.time() - start
        return result, seconds 
        

    def _compute(self, crunch):
        # Make a delay according to the size of the input to illustrate a complex compute
        time.sleep(crunch * self.computeDelay)
        if crunch == 1:
            # Minimum task
            result = 1
        else:
            # Split task up, use recursion
            c1 = int(crunch/2)
            c2 = crunch - c1
            result = self._compute(c1) + self._compute(c2)
        return result

    def _computeCache(self, crunch):
        # Check if the result was already calculated and is therefore in the cache
        if crunch in self.cache:
            return self.cache[crunch]
        # Make a delay according to the size of the input to illustrate a complex compute
        time.sleep(crunch * self.computeDelay)
        if crunch == 1:
            # Minimum task
            result = 1
        else:
            # Split task up, use recursion
            c1 = int(crunch/2)
            c2 = crunch - c1
            result = self._computeCache(c1) + self._computeCache(c2)
        # New result, put in cache 
        self.cache[crunch] = result
        return result

if __name__ == "__main__":
    mycompute = Compute(0.02)
    print(mycompute.compute(200, useCache=False))
    print(mycompute.compute(200, useCache=True))
    print(mycompute.compute(200, useCache=True))
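For comparison, the same split-and-recurse computation can be memoized with the standard library's functools.lru_cache decorator instead of a hand-rolled class-variable dict. This is a sketch, not part of the example above; the artificial delay is omitted so it runs instantly, and note that the result of compute(n) is simply n (the sum of n minimum tasks of 1).

```python
#!/usr/bin/env python3
# Alternative sketch: memoization via functools.lru_cache
import functools

@functools.lru_cache(maxsize=None)
def compute(crunch):
    if crunch == 1:
        # Minimum task
        return 1
    # Split task up, use recursion; cached results are reused automatically
    c1 = crunch // 2
    return compute(c1) + compute(crunch - c1)

print(compute(200))  # 200
print(compute.cache_info())  # shows how many calls were served from cache
```

The decorator handles the cache lookup and insertion that _computeCache does by hand, and cache_info() exposes hit/miss counts for inspection.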