Issue in Python computing entropy

Question

I am trying to calculate the differential entropy (from information theory) but run into some issues in Python. My attempt is the following:

I have the following differential entropy function:

import numpy as np
from scipy.stats import norm
from scipy import integrate

def diff_entropy(nu, constant):
    
    def pdf_gaus_mixture(input):
        return (1-nu)*norm.pdf(input, loc=0, scale=1) + nu*norm.pdf(input, loc=constant, scale=1)
    
    def func(input):
        return pdf_gaus_mixture(input) * np.log(1 / pdf_gaus_mixture(input))
    
    return integrate.quad(func, -np.inf, np.inf)[0]

I would like to compute the following:

nu=0.1
beta=0.01
delta=0.1
sigma=0.01
diff_entropy(nu, np.sqrt(1/((beta/delta)+(sigma**2))))

But Python is giving me the following errors:

<ipython-input-22-6267f1f9e56a>:7: RuntimeWarning: divide by zero encountered in double_scalars
  return pdf_gaus_mixture(input) * np.log(1 / pdf_gaus_mixture(input))
<ipython-input-22-6267f1f9e56a>:7: RuntimeWarning: invalid value encountered in double_scalars
  return pdf_gaus_mixture(input) * np.log(1 / pdf_gaus_mixture(input))
<ipython-input-22-6267f1f9e56a>:7: RuntimeWarning: overflow encountered in double_scalars
  return pdf_gaus_mixture(input) * np.log(1 / pdf_gaus_mixture(input))
<ipython-input-22-6267f1f9e56a>:9: IntegrationWarning: The occurrence of roundoff error is detected, which prevents 
  the requested tolerance from being achieved.  The error may be 
  underestimated.
  return integrate.quad(func, -np.inf, np.inf)[0]
nan

Issue: What am I doing wrong? I suspect the issue is that the endpoints of the integral are negative and positive infinity. I could change them to something small like ±10, but I am afraid of losing accuracy in the approximation. Is there a smarter way to overcome this? Thanks.


Answer 1

Score: 2

Your nested function func repeatedly multiplies 0.0 by np.inf, which is undefined. I discovered this by modifying your function like this:

def diff_entropy(nu, constant):

    def pdf_gaus_mixture(input):
        return (1-nu)*norm.pdf(input, loc=0, scale=1) + nu*norm.pdf(input, loc=constant, scale=1)

    def func(input):
        expr1 = pdf_gaus_mixture(input)
        expr2 = np.log(1 / pdf_gaus_mixture(input))
        if expr1 == 0:
            print(input, expr1, expr2)  # fires whenever the pdf underflows to 0
        return expr1 * expr2

    return integrate.quad(func, -np.inf, np.inf)[0]
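The failure can also be reproduced in isolation (a small sketch, not part of the original answer): far enough into the tail the normal pdf underflows to exactly 0.0, so log(1/pdf) is inf, and 0.0 * inf is nan under IEEE-754 rules, which then poisons the quadrature sum:

```python
import numpy as np
from scipy.stats import norm

p = norm.pdf(-40.0)  # this far into the tail, exp(-800) underflows to exactly 0.0
print(p)             # 0.0
with np.errstate(divide='ignore', invalid='ignore'):
    log_term = np.log(1 / p)  # log(1/0.0) -> inf
    print(p * log_term)       # 0.0 * inf -> nan
```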

Technically, you could loop over the calculation, widening the lower and upper boundaries of the integral step by step until Python ends up multiplying 0 by np.inf, that is to say, until Python can no longer give you a more accurate result. I used the code below to achieve this. Let me know if this was useful.

import numpy as np
from scipy.stats import norm
from scipy import integrate


def diff_entropy(nu, constant, lower_int_boundary, upper_int_boundary):

    def pdf_gaus_mixture(input):

        return (1-nu)*norm.pdf(input, loc=0, scale=1) + nu*norm.pdf(input, loc=constant, scale=1)

    def func(input):
        return pdf_gaus_mixture(input) * np.log(1 / pdf_gaus_mixture(input))

    return integrate.quad(func, lower_int_boundary, upper_int_boundary)[0]


nu=0.1
beta=0.01
delta=0.1
sigma=0.01
constant = np.sqrt(1/((beta/delta)+(sigma**2)))

lower_int_boundary = 0
upper_int_boundary = 0
step_size = 0.25
entropy_results = list()
boundaries = list()

while True:
    lower_int_boundary -= step_size
    upper_int_boundary += step_size

    entropy = diff_entropy(nu, constant, lower_int_boundary, upper_int_boundary)

    if np.isnan(entropy):
        break

    entropy_results.append(entropy)
    boundaries.append([lower_int_boundary, upper_int_boundary])

print(f"Most accurate entropy calculated: {entropy_results[-1]}")  # 1.6664093342815425
print(f"Boundaries used: {boundaries[-1]}")  # [-37.5, 37.5]
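As an alternative to widening the boundaries step by step (this sketch is not from the original answer): since p·log(1/p) → 0 as p → 0, the integrand can simply return 0.0 wherever the pdf underflows, which lets quad keep the infinite limits without producing nan:

```python
import numpy as np
from scipy.stats import norm
from scipy import integrate

def diff_entropy_safe(nu, constant):

    def pdf_gaus_mixture(x):
        return (1 - nu) * norm.pdf(x, loc=0, scale=1) + nu * norm.pdf(x, loc=constant, scale=1)

    def integrand(x):
        p = pdf_gaus_mixture(x)
        # p * log(1/p) -> 0 as p -> 0, so the underflowed tail contributes nothing
        return 0.0 if p == 0.0 else -p * np.log(p)

    return integrate.quad(integrand, -np.inf, np.inf)[0]

nu = 0.1
beta, delta, sigma = 0.01, 0.1, 0.01
constant = np.sqrt(1 / ((beta / delta) + sigma**2))
print(diff_entropy_safe(nu, constant))  # ~1.666, matching the loop above
```

This sidesteps the nan without truncating the domain; the point where the pdf underflows contributes nothing to the integral, so no accuracy is lost.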

huangapple
  • Published on 2023-05-10 at 11:49:25
  • When reposting, please keep the link to this article: https://go.coder-hub.com/76214762.html