
The knapsack problem is a problem in combinatorial optimization that seeks to maximize the objective function $\sum_{i=1}^{n} v_i x_i$ subject to the constraints $\sum_{i=1}^{n} w_i x_i \le W$ and $x_i \in \{0, 1\}$, where $w, v \in \mathbb{R}^n$ and $W$ are provided. We consider the stochastic variant of this problem in which $v$ remains deterministic, but $w$ is an $n$-dimensional vector drawn uniformly at random from $[0, 1]^n$. We establish a sufficient condition under which the weight-sum constraint is almost surely satisfied. Furthermore, we discuss the implications of this result for the deterministic problem.


Introduction
The classical knapsack problem involves choosing a subset $S$ of $n$ items with values $v_1, \ldots, v_n$ and weights $w_1, \ldots, w_n$ such that the inequality $\sum_{i \in S} w_i \le W$ is satisfied and the summation $\sum_{i \in S} v_i$ is maximal. It is well known that the knapsack problem is weakly NP-complete, owing to a polynomial-time reduction from PARTITION together with the existence of an $O(nW)$ pseudopolynomial dynamic programming algorithm. The stochastic variant of the problem is at least NP-hard, since the classical deterministic problem easily reduces to it.
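For concreteness, the $O(nW)$ pseudopolynomial dynamic program mentioned above can be sketched as follows. This is the standard textbook formulation, not code from the paper; it assumes integer weights and an integer capacity, and the instance below is purely illustrative.

```python
def knapsack(values, weights, W):
    """Return the maximum total value achievable with total weight <= W.

    Standard 0/1 knapsack dynamic program: O(n * W) time, O(W) space,
    assuming integer weights and integer capacity W.
    """
    # best[c] = best achievable value using capacity at most c
    best = [0] * (W + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[W]


# Illustrative instance: the optimum takes the items with values 100 and 120.
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # prints 220
```

Note that the running time depends on the numeric value of $W$, not its bit length, which is precisely why the algorithm is pseudopolynomial rather than polynomial.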
In [1], Morton and Wood introduce the same stochastic variant of the knapsack problem. Most notably, they provide various ways to maximize the probability that a total return threshold is met. In [2], Nagarajan summarizes recent results on the stochastic knapsack problem and discusses various extensions to it. Importantly, his discussion demonstrates the need to discover links between the deterministic and stochastic knapsack problems, since open problems remain regarding exactly how difficult each problem is.
In this paper, we adopt the standard convention of normalizing our weights to satisfy the sum constraint $\sum_{i=1}^{n} w_i = 1$. This is done by simply performing componentwise division of the random vector $w$ by $\sum_{i=1}^{n} w_i$.
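The normalization step is elementary; a minimal sketch (with an illustrative dimension $n$ and a fixed seed for reproducibility) is:

```python
import random

random.seed(0)
n = 8
w = [random.random() for _ in range(n)]  # weights drawn uniformly from [0, 1]^n
total = sum(w)
w_normalized = [wi / total for wi in w]  # componentwise division by the sum
print(sum(w_normalized))                 # 1.0, up to floating-point error
```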

Results
Theorem 1. Let $X_1, X_2, \ldots, X_n \sim U(0, 1)$ be independent random variables, and define the random sum $X = \sum_{i=1}^{n} X_i$. Furthermore, let $\mu = \mathbb{E}[X] = n/2$ denote the expectation of the random sum. For any $0 \le \delta \le 1$, we have
$$P\big(X \ge (1 + \delta)\mu\big) \le e^{-\mu\delta^2/3} \quad \text{and} \quad P\big(X \le (1 - \delta)\mu\big) \le e^{-\mu\delta^2/3}.$$

Proof. A standard application of the multiplicative Chernoff bound, which applies to sums of independent $[0, 1]$-valued random variables, yields
$$P\big(X \ge (1 + \delta)\mu\big) \le \left(\frac{e^{\delta}}{(1 + \delta)^{1 + \delta}}\right)^{\mu} = e^{\mu\left(\delta - (1 + \delta)\log(1 + \delta)\right)}.$$
However, since this bound is difficult to work with, we will loosen it. A Taylor series expansion of $\log(1 + \delta)$ yields $\frac{2\delta}{2 + \delta} \le \log(1 + \delta)$, from which it immediately follows that
$$P\big(X \ge (1 + \delta)\mu\big) \le e^{-\mu\delta^2/(2 + \delta)} \le e^{-\mu\delta^2/3}$$
for $0 \le \delta \le 1$. The lower tail follows from the standard bound $P(X \le (1 - \delta)\mu) \le e^{-\mu\delta^2/2} \le e^{-\mu\delta^2/3}$. This is the desired result, so we are done.
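The bound can be checked empirically by Monte Carlo simulation. The sketch below combines the two one-sided tails of Theorem 1 into a two-sided bound $2e^{-\mu\delta^2/3}$; the values of $n$, $\delta$, and the trial count are illustrative choices, not from the paper.

```python
import math
import random

# Monte Carlo sanity check of the Chernoff-style bound for a sum of
# n independent U(0, 1) random variables with mean mu = n / 2.
random.seed(0)
n, delta, trials = 200, 0.25, 20000
mu = n / 2

deviations = sum(
    1
    for _ in range(trials)
    if abs(sum(random.random() for _ in range(n)) - mu) >= delta * mu
)
empirical = deviations / trials
chernoff = 2 * math.exp(-mu * delta ** 2 / 3)  # two-sided bound

print(empirical <= chernoff)  # prints True
```

In practice the empirical deviation rate is far below the bound, since Chernoff-type inequalities are loose for moderate deviations.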
Theorem 2. Let $I = \{1, 2, \ldots, n\}$ be an indexing set, and let $S \subseteq I$ be a random set of indices drawn uniformly from $\mathcal{P}(I)$. For all $0 \le \epsilon \le 1$, an instance of the stochastic knapsack problem satisfies the inequality
$$\sum_{i \in S} w_i \ge (1 + \epsilon) \cdot \frac{1}{2} \sum_{i=1}^{n} w_i \tag{1}$$
with probability $p$ satisfying $p \le 2e^{-n\epsilon^2/24}$.

Proof. Let $w$ be a vector in $\mathbb{R}^n$ drawn uniformly at random from $[0, 1]^n$ and then held fixed, and let $T$ be the random sum defined by $T \stackrel{\mathrm{def}}{=} \sum_{i=1}^{n} w_i$. By linearity of the expectation, $\mathbb{E}[T] = n/2$. Thus, by the preceding theorem applied with $\delta = 1/2$, we obtain $P(T < n/4) \le e^{-n/24}$, or equivalently,
$$P(T \ge n/4) \ge 1 - e^{-n/24}. \tag{2}$$
Now, for a fixed weight vector satisfying the event in Equation (2), the proportion of subsets for which Equation (1) holds is exactly equal to the probability that Equation (1) holds, provided that we draw each subset from $\mathcal{P}(I)$ uniformly at random. Since each index belongs to $S$ independently with probability $1/2$, the expectation $\mathbb{E}\left[\sum_{i \in S} w_i\right]$ is equal to $T/2$, and we can apply the preceding theorem a second time (its proof depends only on the summands taking values in $[0, 1]$) in order to deduce
$$P\!\left(\sum_{i \in S} w_i \ge (1 + \epsilon) \cdot \frac{T}{2}\right) \le e^{-(T/2)\epsilon^2/3}.$$
But since $T \ge n/4$ by assumption, the monotonicity of $f(x) = e^x$ implies that this probability is at most $e^{-n\epsilon^2/24}$. Combining this with the failure probability in Equation (2) and using $\epsilon \le 1$ yields $p \le e^{-n/24} + e^{-n\epsilon^2/24} \le 2e^{-n\epsilon^2/24}$, which is the desired result.
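The conclusion of the theorem can also be probed numerically: draw one weight vector $w$, sample uniformly random subsets, and compare the empirical violation rate against the exponential bound. The constant in the exponent is taken here as $\epsilon^2/24$, matching the two Chernoff applications in the proof; all parameters below are illustrative.

```python
import math
import random

# Empirical sketch of Theorem 2: the fraction of uniformly random subsets
# whose weight exceeds (1 + eps) * (total weight / 2) should respect the
# exponential bound (assumed here to be 2 * exp(-n * eps^2 / 24)).
random.seed(1)
n, eps, trials = 300, 0.5, 5000
w = [random.random() for _ in range(n)]
half_total = sum(w) / 2

violations = 0
for _ in range(trials):
    # Including each index independently with probability 1/2 draws a
    # subset uniformly from the power set of {1, ..., n}.
    subset_weight = sum(wi for wi in w if random.random() < 0.5)
    if subset_weight >= (1 + eps) * half_total:
        violations += 1

bound = 2 * math.exp(-n * eps ** 2 / 24)
print(violations / trials <= bound)  # prints True
```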

Discussion
The implications of this result are fairly easy to see. Interestingly, this result demonstrates that for an instance of the stochastic knapsack problem, a random subset of $\{1, 2, \ldots, n\}$ will satisfy the knapsack sum constraint with probability tending to $1$ as $n \to \infty$, provided that $W \ge 1/2$ (with the weights normalized to sum to $1$). Symmetrically, our results imply that a random subset will fail to satisfy the sum constraint with probability tending to $1$ provided that $W < 1/2$. This result also has implications for the classical version of the knapsack problem. In particular, even if $w$ is not chosen uniformly at random, our result would still hold as long as the weights satisfy $w_i = O(1/n)$ for every index $i$. This is a direct consequence of an optimization of the generic Chernoff bound whose derivation is provided in [3].
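The sharp threshold at $W = 1/2$ is easy to visualize in simulation: with normalized weights, random subsets almost always fit when $W$ is slightly above $1/2$ and almost never fit when it is slightly below. The sketch below uses illustrative parameters and a margin of $\pm 0.05$ around the threshold.

```python
import random

# Illustration of the W = 1/2 threshold for normalized stochastic weights.
random.seed(2)
n, trials = 2000, 1000
w = [random.random() for _ in range(n)]
total = sum(w)
w = [wi / total for wi in w]  # normalize so the weights sum to 1


def fit_fraction(W):
    """Fraction of uniformly random subsets whose total weight is <= W."""
    fits = 0
    for _ in range(trials):
        if sum(wi for wi in w if random.random() < 0.5) <= W:
            fits += 1
    return fits / trials


# Expected behavior: a fraction near 1 above the threshold, near 0 below it.
print(fit_fraction(0.55), fit_fraction(0.45))
```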