Out of memory when computing the kernel on large graphs
Hi, I have several unweighted, unlabeled graphs with 6494 edges. When I try to calculate the graph similarity, the following error is raised:
(base) symexe@ubuntu1804:~/CFGFactory$ python CFGenerator.py
/home/symexe/miniconda3/lib/python3.7/site-packages/graphdot-0.8a10-py3.7.egg/graphdot/graph/__init__.py:24: UserWarning: Cannot import RDKit, `graph.from_rdkit()` will be unavailable.
'Cannot import RDKit, `graph.from_rdkit()` will be unavailable.\n'
Traceback (most recent call last):
  File "CFGenerator.py", line 82, in <module>
    main()
  File "CFGenerator.py", line 76, in main
    C._cfg_similarity()
  File "CFGenerator.py", line 54, in _cfg_similarity
    R = mlgk([Graph.from_networkx(g) for g in self._cfg_all])
  File "/home/symexe/miniconda3/lib/python3.7/site-packages/graphdot-0.8a10-py3.7.egg/graphdot/kernel/marginalized/_kernel.py", line 241, in __call__
    timer,
  File "/home/symexe/miniconda3/lib/python3.7/site-packages/graphdot-0.8a10-py3.7.egg/graphdot/kernel/marginalized/_backend_cuda.py", line 314, in __call__
    launch_block_count, max_graph_size, traits
  File "/home/symexe/miniconda3/lib/python3.7/site-packages/graphdot-0.8a10-py3.7.egg/graphdot/kernel/marginalized/_backend_cuda.py", line 107, in allocate_pcg_scratch
    n_temporaries
  File "/home/symexe/miniconda3/lib/python3.7/site-packages/graphdot-0.8a10-py3.7.egg/graphdot/kernel/marginalized/_backend_cuda.py", line 85, in _allocate_scratch
    PCGScratch(length, n_temporaries) for _ in range(number)
  File "/home/symexe/miniconda3/lib/python3.7/site-packages/graphdot-0.8a10-py3.7.egg/graphdot/kernel/marginalized/_backend_cuda.py", line 85, in <listcomp>
    PCGScratch(length, n_temporaries) for _ in range(number)
  File "/home/symexe/miniconda3/lib/python3.7/site-packages/graphdot-0.8a10-py3.7.egg/graphdot/kernel/marginalized/_scratch.py", line 24, in __init__
    super().__init__(capacity, n_temporaries, 16, 1, np.float32)
  File "/home/symexe/miniconda3/lib/python3.7/site-packages/graphdot-0.8a10-py3.7.egg/graphdot/kernel/marginalized/_scratch.py", line 17, in __init__
    self.data = gpuarray.empty(self.nrow * self.ncol, dtype)
  File "/home/symexe/miniconda3/lib/python3.7/site-packages/pycuda/gpuarray.py", line 210, in __init__
    self.gpudata = self.allocator(self.size * self.dtype.itemsize)
pycuda._driver.MemoryError: cuMemAlloc failed: out of memory
I followed the instructions in https://graphdot.readthedocs.io/en/latest/example/unweighted-nodelabeled.html ; the only difference is that I am using my own graphs this time. Is there an API to modify the batch size or something similar, so that GPU memory consumption can be limited and the out-of-memory error avoided?
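In case it helps discussion: one generic workaround I have considered is assembling the Gram matrix tile by tile, so that each kernel evaluation only touches a bounded number of graphs at a time. The sketch below shows the pattern with a stand-in dot-product kernel; whether it applies here depends on the assumption that the graphdot kernel accepts a two-argument cross-kernel call like `mlgk(X, Y)`, which I have not verified.

```python
import numpy as np

def gram_blockwise(kernel, items, block=256):
    """Assemble the full Gram matrix from block x block tiles.

    `kernel(X, Y)` must return the cross-similarity matrix between two
    lists of items; each call only sees `block`-sized chunks, which
    bounds the peak memory of any single kernel launch.
    """
    n = len(items)
    R = np.empty((n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            R[i:i + block, j:j + block] = kernel(
                items[i:i + block], items[j:j + block]
            )
    return R

# Stand-in kernel for demonstration: pairwise dot products of vectors.
# With graphdot one might pass e.g. `lambda X, Y: mlgk(X, Y)` instead,
# assuming the two-argument cross-kernel call is supported.
dot_kernel = lambda X, Y: np.array([[x @ y for y in Y] for x in X])

vecs = [np.random.rand(4) for _ in range(10)]
R = gram_blockwise(dot_kernel, vecs, block=3)
```

The result should match the single-shot `kernel(items, items)` exactly, since tiling only changes how the computation is scheduled, not what is computed.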