A memory leak is observed when using the KVEmbedding class on Python 3.10.*. The same code does not leak on Python 3.8.11. The issue may stem from differences in how Python 3.10.* handles memory allocation and deallocation, or from an incompatibility with the libraries used.
Setup:
Environment:
Python 3.8.11 (No memory leak observed)
Python 3.10.* (Memory leak occurs)
Dependencies:
tokenizers==0.20.3
torch==2.0.1+cu117
torchvision==0.15.2+cu117
tqdm==4.67.0
transformers==4.46.0
Attempts to Resolve:
We tried various strategies to address the memory leak, but none were successful. These include:
Explicit Garbage Collection:
Used gc.collect() to manually invoke garbage collection after each batch.
Variable Deletion:
Explicitly deleted intermediate variables with del to release memory.
CUDA Cache Management:
Used torch.cuda.empty_cache() to free up GPU memory.
Library Versions:
Tried multiple versions of tokenizers and transformers libraries but observed no improvement.
Despite these efforts, the memory leak persisted in Python 3.10.*.
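In pure Python, the combination of variable deletion and explicit collection does reclaim even cyclic objects, which is part of what makes the persistence of this leak surprising. The following is a minimal stdlib-only sketch of the cleanup pattern we applied per batch; the Batch class is a hypothetical stand-in for our per-batch intermediates, and the torch.cuda.empty_cache() call is noted in a comment rather than executed, to keep the sketch dependency-free:

```python
import gc
import weakref

class Batch:
    """Hypothetical stand-in for a batch of intermediate objects."""
    def __init__(self, data):
        self.data = data
        self.cycle = self  # reference cycle: only the cyclic GC can reclaim this

def process(batches):
    observers = []
    for data in batches:
        batch = Batch(data)
        # Weak reference lets us observe reclamation without keeping the batch alive.
        observers.append(weakref.ref(batch))
        # ... embedding work would happen here ...
        del batch      # variable deletion, as in our mitigation attempts
        gc.collect()   # explicit garbage collection after each batch
        # torch.cuda.empty_cache() would go here on a CUDA run (omitted here).
    return observers

refs = process([[1, 2], [3, 4]])
print(all(r() is None for r in refs))  # True: every batch was reclaimed
```

On both 3.8 and 3.10 this pattern frees each batch, so whatever is retaining memory in our real code path is presumably being kept alive by a reference the collector can still see.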
Call for Assistance: We have exhausted our efforts to identify and resolve this memory leak. If anyone with expertise in Python memory management, PyTorch, or Hugging Face Transformers can assist, we would greatly appreciate your help.
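For anyone investigating: tracemalloc snapshot diffs taken between batches can localize which allocation sites are growing, without any ML dependencies. This is a generic diagnostic sketch rather than code from our report; top_growth is a hypothetical helper name, and the bytearray list simulates a leak of roughly 1 MB:

```python
import tracemalloc

def top_growth(snap_before, snap_after, limit=5):
    """Return the allocation sites that grew the most between two snapshots."""
    stats = snap_after.compare_to(snap_before, "lineno")
    return stats[:limit]

tracemalloc.start()
before = tracemalloc.take_snapshot()
leaked = [bytearray(10_000) for _ in range(100)]  # simulated leak: ~1 MB retained
after = tracemalloc.take_snapshot()
for stat in top_growth(before, after):
    print(stat)  # file:line, size delta, and allocation count delta
tracemalloc.stop()
```

Running this around a real KVEmbedding batch loop (snapshot before and after N batches) should name the file and line whose allocations grow monotonically, which would narrow the leak to Python-heap objects versus CUDA-side memory.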
Who can help?
@sgugger @thomwolf @ArthurZucker
Information
Tasks
Reproduction
Expected behavior
No memory leaks occur on Python 3.10.*.
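That expectation could be phrased as a regression check: traced memory growth across batches should stay near zero once per-batch allocations are freed. This is a sketch with a stand-in workload; the real check would drive KVEmbedding batches, and run_batches is a hypothetical helper:

```python
import gc
import tracemalloc

def run_batches(n_batches, work):
    """Run `work` once per batch; return traced-memory growth in bytes."""
    tracemalloc.start()
    gc.collect()
    baseline = tracemalloc.get_traced_memory()[0]
    for i in range(n_batches):
        work(i)
        gc.collect()
    current = tracemalloc.get_traced_memory()[0]
    tracemalloc.stop()
    return current - baseline

# A well-behaved workload: each batch's allocations do not outlive the batch.
growth = run_batches(50, lambda i: sum(bytearray(100_000)))
print(growth)  # expected: small, far below 50 * 100 kB, since nothing is retained
```

A leaking workload (one that appends each batch's buffers to a long-lived list, as a retained-reference leak would) makes growth scale with the batch count instead of staying flat, which is the behavior we see on 3.10.* but not 3.8.11.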