Example code - memoization


Memoization is an optimization technique that remembers the results of CPU/time-expensive function calls so the work is not repeated.
It is an example of the Golden Rule: Do not repeat work that has already been done.
The technique is also known as "caching of results".
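A minimal way to see the idea in isolation is the standard library decorator functools.lru_cache, which memoizes a function automatically. The sketch below is only an illustration of the principle; the fib function is a stand-in for an expensive computation and is not part of the larger example further down.

from functools import lru_cache

@lru_cache(maxsize=None)      # remember every result ever computed
def fib(n):
    # Naive recursive Fibonacci; exponential without memoization
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))               # fast, because each fib(k) is computed only once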

The main idea of the code is that there is an expensive method, compute, in the Compute class. The method is made artificially expensive by introducing a delay: given a number, compute delays execution by number * delay seconds, so larger numbers give longer delays. It then splits the number into two halves and calls itself recursively on each half, continuing until the number is 1.
The user can decide whether to use a cache or not and see how that affects the runtime of the compute method, which returns the result together with the number of seconds used.

#!/usr/bin/env python3
# Demo of the effectiveness of caching
import time

class Compute:
    # Class variable used for caching
    cache = dict()
    
    def __init__(self, computeDelay):
        # Instance variable to determine the difficulty of the computing (a time in sec)
        if isinstance(computeDelay, (int, float)):
            self.computeDelay = computeDelay
        else:
            raise ValueError("Number in seconds expected")
    
    def compute(self, crunch, useCache=False):
        if not isinstance(crunch, int):
            raise ValueError("Integer expected")
        if not isinstance(useCache, bool):
            raise ValueError("Boolean expected")
        start = time.time()
        if useCache:
            result = self._computeCache(crunch)
        else:
            result = self._compute(crunch)
        seconds = time.time() - start
        return result, seconds

    def _compute(self, crunch):
        # Make a delay according to the size of the input to illustrate a complex compute
        time.sleep(crunch * self.computeDelay)
        if crunch == 1:
            # Minimum task
            result = 1
        else:
            # Split task up, use recursion
            c1 = int(crunch/2)
            c2 = crunch - c1
            result = self._compute(c1) + self._compute(c2)
        return result

    def _computeCache(self, crunch):
        # Check if the result is already calculated and therefore in the cache
        if crunch in self.cache:
            return self.cache[crunch]
        # Make a delay according to the size of the input to illustrate a complex compute
        time.sleep(crunch * self.computeDelay)
        if crunch == 1:
            # Minimum task
            result = 1
        else:
            # Split task up, use recursion
            c1 = int(crunch/2)
            c2 = crunch - c1
            result = self._computeCache(c1) + self._computeCache(c2)
        # New result, put in cache 
        self.cache[crunch] = result
        return result

if __name__ == "__main__":
    mycompute = Compute(0.02)
    print(mycompute.compute(200, useCache=False))
    print(mycompute.compute(200, useCache=True))
    print(mycompute.compute(200, useCache=True))
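
Running the script shows the effect of the cache: the uncached call sleeps once for every node in the recursion tree, the first cached call only sleeps once for each distinct value of crunch, and the second cached call returns almost immediately because cache is a class variable, so the results stored by the previous call are still available.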