Hi,
Please see the memory_profiler output below: the first call to predict() starts at ~2.3 GB, and by the n-th call resident memory has grown to ~30.6 GB. As mentioned in my previous comment, it kept climbing past ~80 GB before I stopped the run. Sorry, I cannot share the code itself, but the pattern is straightforward to recreate; a stripped-down sketch of the calling pattern is at the end of this comment. Any help would be appreciated. Thanks!
###############################################
#First instance of leakage (at the GNN.predict() call, line 41 of the profile):
Line # Mem usage Increment Occurrences Line Contents
=============================================================
29 2337.5 MiB 2337.5 MiB 1 @profile
30 def predict_hpwl(graph, graph_label, model):
31 2337.5 MiB 0.0 MiB 1 lindex = range(len([graph_label]))
32 2337.6 MiB 0.0 MiB 2 gdata = DataGenerator("Prediction",graphs=[graph],
33 2337.5 MiB 0.0 MiB 1 labels=[graph_label],
34 2337.5 MiB 0.0 MiB 1 indices=lindex,
35 2337.5 MiB 0.0 MiB 1 shuffle=True,
36 2337.5 MiB 0.0 MiB 1 cache_size=10,
37 2337.5 MiB 0.0 MiB 1 debug=False,
38 2337.5 MiB 0.0 MiB 1 isSparse=True)
39
40 ## Test the GNN
41 2487.5 MiB 149.9 MiB 2 hpwl = GNN.predict(gdata,
42 2337.6 MiB 0.0 MiB 1 max_queue_size=10,
43 2337.6 MiB 0.0 MiB 1 workers=8,
44 2337.6 MiB 0.0 MiB 1 use_multiprocessing=True
45 )
46
47
48 2486.5 MiB -1.0 MiB 1 keras.backend.clear_session()
49
50
51 2486.5 MiB 0.0 MiB 1 return hpwl
###############################################
#n-th instance of leakage (at the GNN.predict() call, line 41 of the profile):
Line # Mem usage Increment Occurrences Line Contents
=============================================================
29 30661.9 MiB 30661.9 MiB 1 @profile
30 def predict_hpwl(graph, graph_label, model):
31 30661.9 MiB 0.0 MiB 1 lindex = range(len([graph_label]))
32 30661.9 MiB 0.0 MiB 2 gdata = DataGenerator("Prediction",graphs=[graph],
33 30661.9 MiB 0.0 MiB 1 labels=[graph_label],
34 30661.9 MiB 0.0 MiB 1 indices=lindex,
35 30661.9 MiB 0.0 MiB 1 shuffle=True,
36 30661.9 MiB 0.0 MiB 1 cache_size=10,
37 30661.9 MiB 0.0 MiB 1 debug=False,
38 30661.9 MiB 0.0 MiB 1 isSparse=True)
39
40 ## Test the GNN
41 30720.0 MiB 58.1 MiB 2 hpwl = GNN.predict(gdata,
42 30661.9 MiB 0.0 MiB 1 max_queue_size=10,
43 30661.9 MiB 0.0 MiB 1 workers=8,
44 30661.9 MiB 0.0 MiB 1 use_multiprocessing=True
45 )
46
47
48 30720.0 MiB -0.0 MiB 1 keras.backend.clear_session()
49
50
51 30720.0 MiB 0.0 MiB 1 return hpwl
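###############################################
For completeness, the calling pattern is essentially the loop below. load_graphs and the model path are placeholders for project-specific code, DataGenerator is our own generator class, and predict_hpwl is exactly the function profiled above:

import keras

# Placeholder for loading the trained GNN model (actual path/model omitted).
GNN = keras.models.load_model("gnn_model")

def predict_hpwl(graph, graph_label, model):
    # Same body as shown in the profile above.
    gdata = DataGenerator("Prediction",
                          graphs=[graph],
                          labels=[graph_label],
                          indices=range(len([graph_label])),
                          shuffle=True,
                          cache_size=10,
                          debug=False,
                          isSparse=True)
    hpwl = GNN.predict(gdata,
                       max_queue_size=10,
                       workers=8,
                       use_multiprocessing=True)
    keras.backend.clear_session()   # releases essentially nothing (see profile)
    return hpwl

# One graph at a time; resident memory grows on every iteration.
for graph, graph_label in load_graphs():   # placeholder loader of (graph, label) pairs
    hpwl = predict_hpwl(graph, graph_label, GNN)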