numpy won't overcommit memory even when vm.overcommit_memory=1


Question

I am running into a numpy error, numpy.core._exceptions.MemoryError, in my code. I have plenty of available memory on my machine, so this shouldn't be a problem.
(This is on a Raspberry Pi, armv7l, 4 GB.)

$ free
              total        used        free      shared  buff/cache   available
Mem:        3748172       87636     3384520        8620      276016     3528836
Swap:       1048572           0     1048572

I found this post, which suggested that I should enable memory overcommit in the kernel, and so I did:

$ cat /proc/sys/vm/overcommit_memory
1

Now when I try to run this example:

import numpy as np
arrays = [np.empty((18, 602, 640), dtype=np.float32) for i in range(200)]

I get the same error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
numpy.core._exceptions.MemoryError: Unable to allocate 26.5 MiB for an array with shape (18, 602, 640) and data type float32
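
(For reference, the 26.5 MiB in the message is the size of a single array; the full example tries to reserve 200 of them. A quick check of the arithmetic:)

import numpy as np

# size of one (18, 602, 640) float32 array, in MiB
one = 18 * 602 * 640 * np.dtype(np.float32).itemsize / 2**20
print(one)        # ~26.46 MiB, which numpy reports as 26.5 MiB
print(one * 200)  # ~5291 MiB (~5.2 GiB) requested by the full example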

Why is Python (or numpy) behaving that way, and how can I get it to work?

EDIT:
Answers to questions in the replies:

This is a 32-bit system (armv7l):

>>> sys.maxsize
2147483647
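
(Equivalently, the interpreter's pointer size confirms a 32-bit build:)

>>> import struct
>>> struct.calcsize("P") * 8
32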

I printed the approximate total size allocated so far (according to the error message, each iteration should add 26.5 MiB) to see at which point the example fails:

def allocate_arr(i):
    print(i, i * 26.5)
    return np.empty((18, 602, 640), dtype=np.float32)

arrays = [allocate_arr(i) for i in range(0, 200)]

The output below shows that this fails at around 3 GB of RAM allocated:

1 26.5
2 53.0
3 79.5
...
111 2941.5
112 2968.0
113 2994.5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
  File "<stdin>", line 3, in allocate_arr
numpy.core._exceptions.MemoryError: Unable to allocate 26.5 MiB for an array with shape (18, 602, 640) and data type float32

Is 3 GB the limit? Is there a way to increase it?
Also, isn't this exactly what overcommitting is for?

Answer 1

Score: 2

By default, 32-bit Linux has a 3:1 user/kernel split. That is, of the 4 GB that a 32-bit pointer can address, 3 GB is reserved for user space and 1 GB for kernel space. Thus, any single process can use at most 3 GB of memory. The vm.overcommit setting is unrelated to this; it is about using more virtual memory than there is physical memory backing it.
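
A minimal sketch (not part of the original answer) that makes this visible: np.empty only reserves address space without touching the pages, yet on a 32-bit build the reservations still fail near 3 GB no matter how much RAM is free or how vm.overcommit_memory is set:

import numpy as np

# Reserve 100 MiB float32 blocks until allocation fails, then report the total.
# The pages are never written, so this measures address space, not physical RAM.
chunks, total_mib = [], 0
try:
    while True:
        chunks.append(np.empty(100 * 2**20 // 4, dtype=np.float32))
        total_mib += 100
except MemoryError:
    print("allocation failed after reserving about", total_mib, "MiB")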

There used to be so-called 4G/4G support in the Linux kernel (not sure if these patches were ever mainlined?), allowing the full 4 GB to be used by the user-space process and another 4 GB address space by the kernel, at the cost of worse performance (TLB flush at every syscall?). But AFAIU these features have bitrotted, as everyone interested in using lots of memory moved to 64-bit systems a long time ago.

Answer 2

Score: 0

Others have experienced similar issues in the past. Does the issue persist even on a 64-bit OS? It's possible that the issue is related to the fact that you are using a 32-bit system. On a 32-bit system, the maximum amount of addressable memory for any given process is 4 GB, and the OS reserves part of that address space for the kernel (1 GB), which could explain why you are hitting the limit at around 3 GB.

Given, as per your comments below, that you are constrained to a 32-bit OS, here are some additional things you may want to try (I have not put them in the comments because they are hard to format there):

  • Increase swap space: On a 32-bit system, increasing the swap space can provide additional virtual memory. You can adjust the swap size by editing the configuration file /etc/dphys-swapfile on the Raspberry Pi and then restarting the swap service. There is no guarantee this will help, but it is quick to try.

  • Split computations: If your computations involve large datasets or memory-intensive operations, consider splitting the task into smaller, manageable parts. Process data in smaller chunks or batches, perform computations incrementally, or use techniques like streaming or memory-mapped files to reduce the memory requirements at any given time (see the sketch after this list).

  • Consider alternative libraries: If NumPy is too memory-intensive for your 32-bit system, you might explore alternative libraries that offer similar functionality but with lower memory requirements. For example, you could look into using Pandas with smaller datasets or exploring specialized libraries for specific tasks.

  • The Raspberry Pi kernel is open source and can be recompiled: you would edit the kernel configuration file (/usr/src/linux/.config) and rebuild. Note, though, that the overcommit policy is a runtime sysctl (vm.overcommit_memory, which you already have set to 1) rather than a compile-time option; the compile-time knob that governs the roughly 3 GB per-process ceiling is the user/kernel address-space split (the CONFIG_VMSPLIT_* options on 32-bit ARM).
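
As a rough sketch of the memory-mapped-file idea above (the file name arrays.dat and the process() helper are placeholders, not part of the original answer), one can map a single ~26.5 MiB slab of a large on-disk array at a time, so that only one slab occupies address space at any moment:

import numpy as np

slab_shape = (18, 602, 640)
slab_bytes = int(np.prod(slab_shape)) * 4  # float32 is 4 bytes per element

for i in range(200):
    # Map only the i-th slab of the (assumed pre-existing) file into memory.
    slab = np.memmap("arrays.dat", dtype=np.float32, mode="r",
                     offset=i * slab_bytes, shape=slab_shape)
    process(slab)  # placeholder for the per-slab computation
    del slab       # drop the mapping before the next iteration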
