
Divide N people into K groups: Why is the big O of this algorithm O(N^2 * K)?

Question

The description of the problem and its solution(s) can be found here:

https://www.geeksforgeeks.org/count-the-number-of-ways-to-divide-n-in-k-groups-incrementally/

Basically, the problem is: given N people, how many ways can you divide them into K groups, such that each group has at least as many people as the one that came before it?

The solution is to recurse through every possibility, and its complexity can be cut down from O(N<sup>K</sup>) to O(N<sup>2</sup> * K) through dynamic programming.

I understand the complexity of the old recursive solution, but have trouble understanding why the dynamic programming solution has O(N<sup>2</sup> * K) complexity. How does one come to this conclusion about the dynamic programming solution's time complexity? Any help would be appreciated!
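For concreteness, the brute-force recursion the question refers to can be sketched as follows (this is my own minimal version, not code from the linked article): for the current group, try every size that is at least the previous group's size, and recurse on the people left over.

```java
public class NaiveCount {
    // Count the ways to split `left` people into `groupsLeft` more groups,
    // where each new group must have at least `minSize` members.
    static int countWays(int minSize, int left, int groupsLeft) {
        if (groupsLeft == 0)
            return left == 0 ? 1 : 0; // valid only if everyone is placed
        int ways = 0;
        for (int size = minSize; size <= left; size++)
            ways += countWays(size, left - size, groupsLeft - 1);
        return ways;
    }

    public static void main(String[] args) {
        // N = 8, K = 4: {1,1,1,5}, {1,1,2,4}, {1,1,3,3}, {1,2,2,3}, {2,2,2,2}
        System.out.println(countWays(1, 8, 4)); // prints 5
    }
}
```

Each call branches over up to N group sizes across K levels of recursion, which is where the O(N<sup>K</sup>) estimate for the naive version comes from.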

Answer 1

Score: 3


First of all, big O notation gives us an idea of the relation t(n)/i(n) between two functions as n -> infinity. More specifically, it is an upper bound on that relation, which means it is some f(n) >= t(n)/i(n). Here t(n) stands for the speed of growth of the time spent on execution, and i(n) describes the speed of growth of the input. In function space (where we work with functions rather than numbers, yet treat functions almost like numbers: we can divide or compare them, for example), the relation between two elements is also a function. Hence, t(n)/i(n) is a function.

Secondly, there are two methods of determining bounds for that relation.

The scientific, observational approach involves the following steps. Measure how much time it takes to execute the algorithm with 10 pieces of input. Then increase the input to 100 pieces, then to 1000, and so on. The speed of growth of the input i(n) is exponential (10^1, 10^2, 10^3, ...). Suppose we observe an exponential speed of growth of time as well (10^1 sec, 10^2 sec, 10^3 sec, ... respectively).

That means t(n)/i(n) = exp(n)/exp(n) = 1 as n -> infinity (for the sake of scientific purity, we can divide and compare functions only as n -> infinity, though this does not matter for the practicality of the method). We can say at least (remember, it is an upper bound) that the execution time of our algorithm does not grow faster than its input does. We might instead have observed, say, a squared exponential speed of growth of time. In that case t(n)/i(n) = exp^2(n)/exp(n) = a^(2n)/a^n = exp(n), a > 1, as n -> infinity, which means our time complexity is O(exp(n)); big O notation only reminds us that this is not necessarily a tight bound. It is also worth pointing out that it does not matter which speed of growth of input we choose. We might have wanted to increase our input linearly. Then t(n)/i(n) = exp(n)*n/n = exp(n) would express the same thing as t(n)/i(n) = exp^2(n)/exp(n) = a^(2n)/a^n = exp(n), a > 1. What matters here is the quotient.
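The observational approach can be mimicked deterministically by counting basic operations instead of seconds (a sketch of mine, not part of the original answer): for a doubly nested loop, the operation count grows as the square of the input, so the quotient t(n)/i(n) itself grows linearly, which is the signature of O(n^2).

```java
public class GrowthRatio {
    // Count the "basic operations" of a quadratic algorithm:
    // a doubly nested loop over n elements performs n * n steps.
    static long quadraticOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                ops++;
        return ops;
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000}) {
            long t = quadraticOps(n);
            // The quotient t(n)/i(n) = n^2 / n = n grows linearly with n.
            System.out.println("n=" + n + " ops=" + t + " ops/n=" + (t / n));
        }
    }
}
```

Counting operations rather than wall-clock time removes measurement noise while preserving the idea: what matters is how the quotient behaves as the input grows.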

The second approach is theoretical and mostly used in the analysis of relatively obvious cases. Say, we have a piece of code from the example:

// DP table
static int [][][]dp = new int[500][500][500];

// Function to count the number
// of ways to divide N people into
// groups such that each group has
// at least as many members as the
// previous one
static int calculate(int pos, int prev,
                     int left, int k)
{
    // Base case: all k groups formed
    if (pos == k)
    {
        if (left == 0)
            return 1;
        else
            return 0;
    }

    // If N was divided completely
    // into fewer than k groups
    if (left == 0)
        return 0;

    // If the subproblem has been
    // solved, reuse the value
    if (dp[pos][prev][left] != -1)
        return dp[pos][prev][left];

    int answer = 0;

    // Try all possible group sizes
    // greater than or equal to prev
    for (int i = prev; i <= left; i++)
    {
        answer += calculate(pos + 1, i,
                            left - i, k);
    }

    return dp[pos][prev][left] = answer;
}

// Function to count the number of
// ways to divide the number N in groups
static int countWaystoDivide(int n, int k)
{
    // Initialize the DP table with -1
    for (int i = 0; i < 500; i++)
    {
        for (int j = 0; j < 500; j++)
        {
            for (int l = 0; l < 500; l++)
                dp[i][j][l] = -1;
        }
    }

    return calculate(0, 1, n, k);
}

The first thing to notice here is the 3-D array dp. It hints at the time complexity of a DP algorithm because usually we traverse it once. Then we are concerned with the size of the array. It is initialized with the size 500*500*500, which does not tell us much, because 500 is a number, not a function, and strictly speaking it does not depend on the input variables; it is done this way for the sake of simplicity. Effectively, dp has size k*n*n, under the assumption that k <= 500 and n <= 500.

Let's prove it. The method static int calculate(int pos, int prev, int left, int k) has three actual variables, pos, prev and left, while k remains constant. The range of pos is 0 to k, because it starts from 0 in return calculate(0, 1, n, k); and the base case is if (pos == k). The range of prev is 1 to n, because it starts from 1 and is set to i, which iterates up to left (and left never exceeds n) in for (int i = prev; i <= left; i++). Finally, the range of left is n down to 0, because it starts from n in return calculate(0, 1, n, k); and decreases down to 0 via left - i. To recap, the number of possible combinations of pos, prev and left is simply their product, k*n*n.
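That state count can also be checked empirically (my own sketch, not from the article): record every distinct (pos, prev, left) triple the recursion visits for small n and k, and verify the count never exceeds k*n*n.

```java
import java.util.HashSet;
import java.util.Set;

public class StateCount {
    static Set<String> states = new HashSet<>();

    // Same recursion as the article's calculate(), but instead of
    // memoizing we record each distinct (pos, prev, left) state.
    static int calculate(int pos, int prev, int left, int k) {
        if (pos == k) return left == 0 ? 1 : 0;
        if (left == 0) return 0;
        states.add(pos + "," + prev + "," + left);
        int answer = 0;
        for (int i = prev; i <= left; i++)
            answer += calculate(pos + 1, i, left - i, k);
        return answer;
    }

    public static void main(String[] args) {
        int n = 20, k = 5;
        calculate(0, 1, n, k);
        // The number of distinct triples is bounded by k * n * n,
        // which is exactly the size of the memo table the DP needs.
        System.out.println(states.size() + " distinct states, bound " + (k * n * n));
    }
}
```

The bound is loose (prev + left can never exceed n + 1, roughly halving the reachable triples), but the product k*n*n is the upper bound the complexity argument relies on.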

The second thing is to prove that each range of pos, prev and left is traversed only once. From the code, this can be determined by analysing this block:

for (int i = prev; i <= left; i++)
{
    answer += calculate(pos + 1, i,
                        left - i, k);
}

All three variables get changed only here. pos grows from 0, increasing by 1 on each step. For each particular value of pos, prev changes by increasing by 1 from prev up to left; for each particular combination of pos and prev, left changes by subtracting i (which ranges from prev to left) from left.

The idea behind this approach is that once we iterate over an input variable by some rule, we get the corresponding time complexity. We could, for example, iterate over a variable by halving the remaining range on each step; in that case we would get logarithmic complexity. Or we could step on every element of the input, and then we would get linear complexity.
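These two iteration rules can be made concrete (again a sketch of mine): halving the remaining range takes about log2(n) steps, while visiting every element takes exactly n steps.

```java
public class IterationRules {
    // Halve the remaining range on every step: ~log2(n) iterations.
    static int halvingSteps(int n) {
        int steps = 0;
        while (n > 1) {
            n /= 2;
            steps++;
        }
        return steps;
    }

    // Visit every element once: exactly n iterations.
    static int linearSteps(int n) {
        int steps = 0;
        for (int i = 0; i < n; i++)
            steps++;
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(halvingSteps(1024)); // prints 10
        System.out.println(linearSteps(1024));  // prints 1024
    }
}
```

The same reasoning applied to the DP above says: the memo table has k*n*n entries, each filled in once, which is where the O(N^2 * K) figure for the state space comes from.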

In other words, from common sense we assume a minimum time complexity of t(n)/i(n) = 1 for every algorithm, meaning that t(n) and i(n) grow equally fast; that also means we do nothing with the input. Once we do something with the input, t(n) becomes f(n) times bigger than i(n). By the logic shown above, we need to estimate f(n).

huangapple
  • Posted on 2020-08-28 10:36:11
  • Please keep this link when reposting: https://go.coder-hub.com/63626681.html