1 Introduction
A general demixing problem is to estimate the quantities or concentrations of the individual components of some observed mixture. Often a linear mixture model is assumed
[1]. In this case the observed mixture is modeled as a linear combination of references for each component known to possibly be in the mixture. If we put these references in the columns of a dictionary matrix $A$, then the mixing model is simply $b = Ax$. Physical constraints often mean that $x$ should be nonnegative, and depending on the application we may also be able to make sparsity assumptions about the unknown coefficients $x$. This can be posed as a basis pursuit problem in which we seek a sparse, and perhaps also nonnegative, linear combination of dictionary elements that matches the observed data. This is a very well studied problem. A standard convex model is nonnegative least squares (NNLS) [2, 3],

(1)   $\min_{x \ge 0} \tfrac{1}{2}\|Ax - b\|_2^2,$

and there are methods based on $\ell_1$ minimization [4, 5, 6]. There are also variations that enforce different group sparsity assumptions on $x$ [7, 8, 9].
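As a point of reference, the baseline NNLS model (1) can be solved with off-the-shelf tools. The following sketch is not from the paper; it uses `scipy.optimize.nnls` on synthetic data and recovers a sparse nonnegative coefficient vector when the dictionary is well conditioned:

```python
import numpy as np
from scipy.optimize import nnls

# Toy linear mixture: columns of A are reference signatures, b is an
# observed mixture of the first and third references (synthetic data).
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((20, 4)))
A /= np.linalg.norm(A, axis=0)            # normalized columns
x_true = np.array([2.0, 0.0, 1.5, 0.0])   # sparse nonnegative coefficients
b = A @ x_true

x_hat, rnorm = nnls(A, b)                 # solves min_{x >= 0} ||Ax - b||_2
print(np.round(x_hat, 4), round(rnorm, 6))
```

With a well-conditioned, unexpanded dictionary like this one, NNLS recovers the exact coefficients; the difficulties discussed below arise once the dictionary becomes highly coherent.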
In this paper we are interested in how to deal with uncertainty in the dictionary. The case when the dictionary is unknown is dealt with in sparse coding and nonnegative matrix factorization (NMF) problems [10, 11, 12, 13, 14, 15], which require learning both the dictionary and a sparse representation of the data. We are, however, interested in the case where we know the dictionary but are uncertain about each element. One example we will study in this paper is differential optical absorption spectroscopy (DOAS) analysis [16], for which we know the reference spectra but are uncertain about how to align them with the data because of wavelength misalignment. Another example we will consider is hyperspectral unmixing [17, 18, 19]. Multiple reference spectral signatures, or endmembers, may have been measured for the same material, and they may all be slightly different if they were measured under different conditions. We may not know ahead of time which one is most consistent with the measured data. Although there is previous work that considers noise in the endmembers [20]
and represents endmembers as random vectors
[21], we may not always have a good general model for endmember variability. For the DOAS example, we do have a good model for the unknown misalignment [16], but even so, incorporating it may significantly complicate the overall model. Therefore, for both examples, instead of attempting to model the uncertainty, we propose to expand the dictionary to include a representative group of possible elements for each uncertain element, as was done in [22]. The grouped structure of the expanded dictionary is known by construction, and this allows us to make additional structured sparsity assumptions about the corresponding coefficients. In particular, the coefficients should be extremely sparse within each group of representative elements, and in many cases we would like them to be at most 1-sparse. We will refer to this as intra-group sparsity. If we expected sparsity of the coefficients for the unexpanded dictionary, then this carries over to an inter-group sparsity assumption about the coefficients for the expanded dictionary. By inter-group sparsity we mean that with the coefficients split into groups, the number of groups containing nonzero elements should also be small. Modeling structured sparsity by applying sparsity penalties separately to overlapping subsets of the variables has been considered in a much more general setting in [8, 23].
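To make the dictionary expansion concrete, here is a small sketch (our own, not the paper's code) that builds an expanded dictionary for the DOAS-style case by including a group of integer-shifted copies of each reference. The helper `expand_dictionary`, the toy references, and the shift set are all illustrative assumptions:

```python
import numpy as np

def expand_dictionary(refs, shifts):
    """Expand each reference column into a group of shifted copies.

    refs   : (n, M) array, one reference signature per column
    shifts : iterable of integer shifts (a stand-in for the unknown
             wavelength misalignment in the DOAS example)
    Returns the expanded (n, M*len(shifts)) dictionary with normalized
    columns and a list of column-index groups, one per reference.
    """
    cols, groups = [], []
    k = 0
    for m in range(refs.shape[1]):
        idx = []
        for s in shifts:
            c = np.roll(refs[:, m], s)
            cols.append(c / np.linalg.norm(c))  # keep columns normalized
            idx.append(k)
            k += 1
        groups.append(idx)
    return np.column_stack(cols), groups

refs = np.eye(8)[:, :3] + 0.1              # 3 toy "spectra" on 8 channels
D, groups = expand_dictionary(refs, shifts=[-1, 0, 1])
print(D.shape, groups)  # (8, 9) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```

The columns within each group are nearly identical, which is exactly the source of the coherence discussed next.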
The expanded dictionary $A$ is usually an underdetermined matrix that is highly coherent because the added columns tend to be similar to one another. This makes it very challenging to find good sparse representations of the data using standard convex minimization and greedy optimization methods. If $A$ satisfies certain properties related to its columns not being too coherent [24], then sufficiently sparse nonnegative solutions are unique and can therefore be found by solving the convex NNLS problem. These assumptions are usually not satisfied for our expanded dictionaries, and while NNLS may still be useful as an initialization, it does not by itself produce sufficiently sparse solutions. Similarly, our expanded dictionaries usually do not satisfy the incoherence assumptions required for $\ell_1$ minimization or greedy methods like Orthogonal Matching Pursuit (OMP) to recover the sparse solution [25, 26]. However, with an unexpanded dictionary having relatively few columns, these techniques can be used effectively for sparse hyperspectral unmixing [27].
The coherence of our expanded dictionary means we need different tools to find good solutions that satisfy our sparsity assumptions. We would like a variational approach as similar as possible to the NNLS model that enforces the additional sparsity while still allowing all the groups to collaborate. We propose adding nonconvex sparsity penalties to the NNLS objective function (1). We can apply these penalties separately to each group of coefficients to enforce intra-group sparsity, and we can simultaneously apply them to the vector of all coefficients to enforce additional inter-group sparsity. From a modeling perspective, the ideal sparsity penalty is $\ell_0$. There is very interesting recent work that directly deals with $\ell_0$ constraints and penalties via a quadratic penalty approach [28]. If the variational model is going to be nonconvex, we prefer to work with a differentiable objective when possible. We therefore explore the effectiveness of sparsity penalties based on the Hoyer measure [29, 30], which is essentially the ratio of the $\ell_1$ and $\ell_2$ norms. In previous works, this has been successfully used to model sparsity in NMF and blind deconvolution applications [29, 31, 32]. We also consider the difference of the $\ell_1$ and $\ell_2$ norms. By the relationship $\|x\|_1 - \|x\|_2 = \|x\|_2 \left( \frac{\|x\|_1}{\|x\|_2} - 1 \right)$, we see that while the ratio of norms is constant in radial directions, the difference increases moving away from the origin except along the axes. Since the Hoyer measure is twice differentiable on the nonnegative orthant away from the origin, it can be locally expressed as a difference of convex functions, and convex splitting or difference of convex (DC) methods [33] can be used to find a local minimum of the nonconvex problem. Some care must be taken, however, to deal with its poor behavior near the origin. It is even easier to apply DC methods when using $\|x\|_1 - \|x\|_2$ as a penalty, since this is already a difference of convex functions, and it is well defined at the origin.
The paper is organized as follows. In Section 2 we define the general model, describe the dictionary structure and show how to use both the ratio and the difference of the $\ell_1$ and $\ell_2$ norms to model our intra- and inter-group sparsity assumptions. Section 3 derives a method for solving the general model, discusses connections to existing methods and includes a convergence analysis. In Section 4 we discuss specific problem formulations for several examples related to DOAS analysis and hyperspectral demixing. Numerical experiments comparing methods and applications to example problems are presented in Section 5.
2 Problem
For the nonnegative linear mixing model $b = Ax$, $x \ge 0$, let $b \in \mathbb{R}^d$, $A \in \mathbb{R}^{d \times n}$ and $x \in \mathbb{R}^n$ with $n = \sum_{m=1}^M n_m$. Let the dictionary $A$ have normalized columns and consist of $M$ groups, each with $n_m$ elements. We can write $A = [A_1, \dots, A_M]$ and $x = [x_1; \dots; x_M]$, where each $A_m \in \mathbb{R}^{d \times n_m}$ and $x_m \in \mathbb{R}^{n_m}$. The general nonnegative least squares problem with sparsity constraints that we will consider is

(2)   $\min_{x \ge 0} F(x),$
where

(3)   $F(x) = \frac{1}{2}\|Ax - b\|_2^2 + \sum_{m=1}^{M} \alpha_m R_m(x_m) + \beta R_0(x).$
The functions $R_m$ represent the intra-group sparsity penalties applied to each group of coefficients $x_m$, $m = 1, \dots, M$, and $R_0$ is the inter-group sparsity penalty applied to all of $x$. If $F$ is differentiable, then a necessary condition for $x^*$ to be a local minimum is given by

(4)   $\langle \nabla F(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \ge 0.$
For the applications we will consider, we want to constrain each vector $x_m$ to be at most 1-sparse, which is to say that we want $\|x_m\|_0 \le 1$. To accomplish this through the model (2), we will need to choose the intra-group penalty parameters to be sufficiently large.
The sparsity penalties $R_m$ and $R_0$ will either be the ratio of the $\ell_1$ and $\ell_2$ norms, defined by

(5)   $R(x) = \frac{\|x\|_1}{\|x\|_2},$

or they will be the difference, defined by

(6)   $R(x) = \|x\|_1 - \|x\|_2.$
A geometric intuition for why minimizing $\frac{\|x\|_1}{\|x\|_2}$ promotes sparsity of $x$ is that since it is constant in radial directions, minimizing it tries to reduce $\|x\|_1$ without changing $\|x\|_2$. As seen in Figure 1, sparser vectors have smaller $\ell_1$ norm on the $\ell_2$ sphere.
Neither the ratio nor the difference is differentiable at zero, and the ratio is not even continuous there. Figure 2 shows a visualization of both penalties in two dimensions.
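The sparsity-promoting behavior of the two penalties is easy to check numerically. In this illustrative sketch (our own, not the paper's code), a 1-sparse vector and a dense vector with the same $\ell_2$ norm are compared:

```python
import numpy as np

def ratio_penalty(x):
    """l1/l2 ratio (Hoyer-type sparsity measure); smaller = sparser."""
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

def diff_penalty(x):
    """l1 - l2 difference; also smallest on 1-sparse vectors."""
    return np.linalg.norm(x, 1) - np.linalg.norm(x, 2)

sparse = np.array([3.0, 0.0, 0.0])
dense = np.array([np.sqrt(3.0)] * 3)     # same l2 norm as `sparse`
print(ratio_penalty(sparse), ratio_penalty(dense))  # 1.0  1.732...
print(diff_penalty(sparse), diff_penalty(dense))    # 0.0  2.196...
```

Note that the ratio is scale invariant (constant in radial directions), while the difference grows along any ray off the axes.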
To obtain a differentiable objective, we can smooth the sparsity penalties by replacing the $\ell_2$ norm with the Huber function $H_\epsilon$, defined by the infimal convolution

(7)   $H_\epsilon(x) = \inf_y \left( \|y\|_2 + \frac{1}{2\epsilon}\|x - y\|_2^2 \right).$
In this way we can define smoothed versions of the sparsity penalties by

(8)   $R^{\epsilon}_{1/2}(x) = \frac{\|x\|_1}{H_\epsilon(x)}$

(9)   $R^{\epsilon}_{1-2}(x) = \|x\|_1 - H_\epsilon(x).$
These smoothed sparsity penalties are shown in Figure 3.
The regularized penalties behave more like the $\ell_1$ norm near the origin and should tend to shrink groups $x_m$ that have small norms.
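A minimal sketch of the Huber smoothing, using the closed form of the infimal convolution in (7) (quadratic inside a ball of radius $\epsilon$, affine outside); the function name is our own:

```python
import numpy as np

def huber_l2(x, eps):
    """Moreau envelope (Huber smoothing) of the l2 norm, as in (7):
    ||x||^2 / (2*eps) for ||x|| <= eps, and ||x|| - eps/2 outside."""
    n = np.linalg.norm(x)
    return n * n / (2.0 * eps) if n <= eps else n - eps / 2.0

eps = 0.1
# The two branches agree at the boundary ||x|| = eps:
print(huber_l2(np.array([eps, 0.0]), eps))                  # ~0.05 (= eps/2)
# The smoothed l1 - l2 penalty acts like the l1 norm near the origin:
x_small = np.array([1e-4, 0.0])
print(np.linalg.norm(x_small, 1) - huber_l2(x_small, eps))  # ~1e-4
```

The second print illustrates the shrinkage effect mentioned above: for small groups the smoothed difference penalty is essentially an $\ell_1$ penalty.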
An alternate strategy for obtaining a differentiable objective that doesn't require smoothing the sparsity penalties is to add additional dummy variables and modify the convex constraint set. Let $w = (w_1, \dots, w_M)$ denote a vector of dummy variables, one for each group. Consider applying $R_m$ to the augmented vectors $(x_m, w_m)$ instead of to $x_m$. Then if we add constraints that keep each $(x_m, w_m)$ bounded away from zero, we are assured that $R_m$ will only be applied to nonzero vectors, even though $x_m$ is still allowed to be zero. Moreover, by adding a further constraint on $w$, we can ensure that at least one of the vectors $x_m$ has one or more nonzero elements. In particular, this prevents $x$ from being zero, so $R_0(x)$ is well defined as well. The dummy variable strategy is our preferred approach for using the $\ell_1 / \ell_2$ penalty. The high variability of the regularized version near the origin creates numerical difficulties. It either needs a lot of smoothing, which makes it behave too much like the $\ell_1$ norm, or its steepness near the origin makes it harder numerically to avoid getting stuck in bad local minima. For the $\ell_1 - \ell_2$ penalty, the regularized approach is our preferred strategy because it is simpler and not much regularization is required. Smoothing also makes this penalty behave more like the $\ell_1$ norm near the origin, but a small shrinkage effect there may in fact be useful, especially for promoting inter-group sparsity. These two main problem formulations are summarized below as Problem 1 and Problem 2 respectively.
Problem 1:
Problem 2:
3 Algorithm
Both Problems 1 and 2 from Section 2 can be written abstractly as

(10)   $\min_{x \in X} F(x), \qquad F(x) = \frac{1}{2}\|Ax - b\|_2^2 + R(x),$

where $X$ is a convex set. Problem 2 is already of this form with $X$ the nonnegative orthant. Problem 1 is also of this form. Note that the objective function of Problem 1 can be written as in (10) if we redefine $R$ to collect all of the penalty terms and consider an expanded vector of coefficients that includes the dummy variables. The data fidelity term can still be written as $\frac{1}{2}\|Ax - b\|_2^2$ if columns of zeros are inserted into $A$ at the indices corresponding to the dummy variables. In this section, we will describe algorithms and convergence analysis for solving (10) under either of two sets of assumptions.
Assumption 1.

$X$ is a convex set.

$F$ is coercive on $X$ in the sense that for any $t$, the sublevel set $\{x \in X : F(x) \le t\}$ is bounded. In particular, $F$ is bounded below.
Assumption 2.

$R$ is concave and differentiable on $X$.

Same assumptions on $X$ and $F$ as in Assumption 1.
Problem 1 satisfies Assumption 1 and Problem 2 satisfies Assumption 2. We will first consider the case of Assumption 1.
Our approach for solving (10) was originally motivated by a convex splitting technique from [34, 35], a semi-implicit method for solving $\frac{dx}{dt} = -\nabla F(x)$ when $F$ can be split into a sum $F = F_1 + F_2$ of convex and concave functions, both in $C^2$. Let $\Lambda$ be an upper bound on the eigenvalues of $\nabla^2 F_2$, and let $\lambda$ be a lower bound on the eigenvalues of $\nabla^2 F_1$. Under the assumption that $\lambda \ge \Lambda$, it can be shown that the update defined by

(11)   $\frac{x^{k+1} - x^k}{\Delta t} = -\nabla F_1(x^{k+1}) - \nabla F_2(x^k)$

doesn't increase $F$ for any time step $\Delta t > 0$. This can be seen by using second order Taylor expansions to derive the estimate

(12)   $F(x^{k+1}) \le F(x^k) - \left( \frac{1}{\Delta t} + \frac{\lambda - \Lambda}{2} \right) \|x^{k+1} - x^k\|^2.$

This convex splitting approach has been shown to be an efficient method, much faster than gradient descent, for solving phase-field models such as the Cahn-Hilliard equation, which has been used for example to simulate coarsening [35] and for image inpainting [36]. By the assumptions on $R$, we can achieve a convex-concave splitting, $F = F_1 + F_2$, by letting $F_1(x) = \frac{1}{2}\|Ax - b\|_2^2 + \frac{1}{2}x^T C x$ and $F_2(x) = R(x) - \frac{1}{2}x^T C x$ for an appropriately chosen positive definite matrix $C$. We can also use the fact that the data fidelity term is quadratic to improve upon the estimate in (12) when bounding $F(x^{k+1})$ by a quadratic function of $x^{k+1}$. Then instead of choosing a time step and updating according to (11), we can dispense with the time step interpretation altogether and choose an update that reduces the upper bound on $F$ as much as possible subject to the constraint $x \in X$. This requires minimizing a strongly convex quadratic function over $X$.
Proposition 3.1.
Let Assumption 1 hold. Also let $\lambda$ and $\Lambda$ be lower and upper bounds respectively on the eigenvalues of $\nabla^2 R(\xi)$ for $\xi \in X$. Then for $x, y \in X$ and for any matrix $C$,
(13) 
Proof.
The estimate follows from combining several second order Taylor expansions of $F$ and $R$ with our assumptions. First expanding $F$ about $y$, we get that

for some $\xi$ between $x$ and $y$. Substituting the definition of $F$ from (10), we obtain
(14) 
Similarly, we can compute Taylor expansions of $R$ about both $x$ and $y$. Again, both intermediate points $\xi_1$ and $\xi_2$ are in $X$. Adding these expressions implies that
From the assumption that the eigenvalues of $\nabla^2 R$ are bounded above by $\Lambda$ on $X$,
(15) 
Adding and subtracting the corresponding gradient terms of $R$ in (14) yields
Using (15),
The assumption that the eigenvalues of $\nabla^2 R$ are bounded below by $\lambda$ on $X$ means
Since the estimate is unchanged by adding and subtracting $\frac{1}{2}(x - y)^T C (x - y)$ for any matrix $C$, the inequality in (13) follows directly. ∎
Corollary.
Let $C$ be symmetric positive definite and let $c$ denote the smallest eigenvalue of $C$. If $c$ is sufficiently large relative to $\lambda$ and $\Lambda$, then for $x, y \in X$,

$F(x) \le F(y) + \langle \nabla F(y), x - y \rangle + \frac{1}{2}(x - y)^T (A^T A + C)(x - y).$
A natural strategy for solving (10) is then to iterate

(16)   $x^{k+1} = \arg\min_{x \in X} \; \langle \nabla F(x^k), x - x^k \rangle + \frac{1}{2}(x - x^k)^T (A^T A + C_k)(x - x^k)$

for $C_k$ chosen to guarantee a sufficient decrease in $F$. The method obtained by iterating (16) can be viewed as an instance of scaled gradient projection [37, 38, 39] where the orthogonal projection onto $X$ is computed in the norm induced by $A^T A + C_k$. The approach of decreasing $F$ by minimizing an upper bound coming from an estimate like (13) can be interpreted as an optimization transfer strategy of defining and minimizing a surrogate function [40], which is done for related applications in [12, 13]. It can also be interpreted as a special case of difference of convex programming [33].
Choosing $C_k$ in a way that guarantees $F(x^{k+1}) \le F(x^k)$ a priori may be numerically inefficient, and it also isn't strictly necessary for the algorithm to converge. To simplify the description of the algorithm, suppose $C_k = c_k C$ for some scalar $c_k > 0$ and symmetric positive definite matrix $C$. Then as $c_k$ gets larger, the method becomes more like explicit gradient projection with small time steps. This can be slow to converge as well as more prone to converging to bad local minima. However, the method still converges as long as each $c_k$ is chosen so that the update decreases $F$ sufficiently. Therefore we want to dynamically choose $c_k$ to be as small as possible such that the update given by (16) decreases $F$ by a sufficient amount. Additionally, we want to ensure that the modulus of strong convexity of the quadratic objective in (16) is large enough by requiring the smallest eigenvalue of $A^T A + C_k$ to be greater than or equal to some $\theta > 0$. The following is an algorithm for solving (10) and a dynamic update scheme for $c_k$ that is similar to Armijo line search but designed to reduce the number of times that the solution to the quadratic subproblem has to be rejected for not decreasing $F$ sufficiently.
Algorithm 1:
A Scaled Gradient Projection Method for Solving (10) Under Assumption 1
Define , , , , , and set .
while or
if
else
end if
end while
It isn't necessary to impose an upper bound on $c_k$ in Algorithm 1 even though we want the sequence $\{c_k\}$ to be bounded. This is because once $c_k$ is large enough, the update is guaranteed to decrease $F$ sufficiently, so the line search effectively bounds $c_k$.
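As an illustration of the overall iteration, the following sketch (our own, not the paper's implementation) minimizes the NNLS objective plus an $\ell_1 - \ell_2$ penalty over the nonnegative orthant. It simplifies Algorithm 1 considerably: a fixed diagonal majorizer replaces the scaled metric and the line search is omitted, so it is a plain projected-gradient sketch:

```python
import numpy as np

def sparse_nnls(A, b, beta=0.1, iters=500):
    """Projected-gradient sketch for
        min_{x >= 0} 0.5*||Ax - b||^2 + beta*(||x||_1 - ||x||_2).
    A fixed diagonal majorizer L*I stands in for the scaled metric of
    Algorithm 1, and the line search is omitted (simplifications)."""
    x = np.maximum(np.linalg.lstsq(A, b, rcond=None)[0], 0.0) + 1e-3
    L = np.linalg.norm(A, 2) ** 2 + beta        # crude curvature bound
    for _ in range(iters):
        nx = np.linalg.norm(x)
        # On x >= 0 the gradient of ||x||_1 is the all-ones vector.
        grad = A.T @ (A @ x - b) + beta * (1.0 - (x / nx if nx > 0 else 0.0))
        x = np.maximum(x - grad / L, 0.0)       # projection onto x >= 0
    return x

rng = np.random.default_rng(1)
A = np.abs(rng.standard_normal((12, 6)))
A /= np.linalg.norm(A, axis=0)
b = A @ np.array([1.0, 0.0, 0.0, 0.6, 0.0, 0.0])
x_hat = sparse_nnls(A, b)
obj = 0.5 * np.linalg.norm(A @ x_hat - b) ** 2 + 0.1 * (
    x_hat.sum() - np.linalg.norm(x_hat))
print(round(obj, 4), bool(x_hat.min() >= 0))
```

Starting from a thresholded least squares solution, as suggested by the NNLS-initialization remark above, keeps the iterates away from the problematic region near the origin.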
Under Assumption 2 it is much more straightforward to derive an estimate analogous to Proposition 3.1. Concavity of $R$ immediately implies

$R(x) \le R(y) + \langle \nabla R(y), x - y \rangle.$

Adding to this the expression

$\frac{1}{2}\|Ax - b\|^2 = \frac{1}{2}\|Ay - b\|^2 + \langle A^T (Ay - b), x - y \rangle + \frac{1}{2}(x - y)^T A^T A (x - y)$

yields

(17)   $F(x) \le F(y) + \langle \nabla F(y), x - y \rangle + \frac{1}{2}(x - y)^T A^T A (x - y)$

for $x, y \in X$. Moreover, the estimate still holds if we add $\frac{1}{2}(x - y)^T C (x - y)$ to the right hand side for any positive semidefinite matrix $C$. We are again led to iterating (16) to decrease $F$, and in this case $C$ need only be included to ensure that $A^T A + C$ is positive definite.
We can let $C_k = C$ since the dependence on $k$ is no longer necessary. We can choose any $C$ such that the smallest eigenvalue of $A^T A + C$ is greater than some fixed positive constant, but it is still preferable to choose $C$ as small as is numerically practical.
Algorithm 2:
A Scaled Gradient Projection Method for Solving (10) Under Assumption 2
Define $x^0 \in X$, symmetric positive definite $C$ and set $k = 0$.
while or
(18)   $x^{k+1} = \arg\min_{x \in X} \; \langle \nabla F(x^k), x - x^k \rangle + \frac{1}{2}(x - x^k)^T (A^T A + C)(x - x^k)$
end while

Since the objective in (18) is zero at $x = x^k$, the minimum value is less than or equal to zero, and so $F(x^{k+1}) \le F(x^k)$ by (17).
Algorithm 2 is also equivalent to iterating

$x^{k+1} = \arg\min_{x \in X} \; \frac{1}{2}\|Ax - b\|^2 + \frac{1}{2}(x - x^k)^T C (x - x^k) + R(x^k) + \langle \nabla R(x^k), x - x^k \rangle,$

which can be seen as an application of the simplified difference of convex algorithm from [33] to the splitting $F_1(x) = \frac{1}{2}\|Ax - b\|^2 + \frac{1}{2}x^T C x$ and $F_2(x) = R(x) - \frac{1}{2}x^T C x$. The DC method in [33] is more general and doesn't require the convex and concave functions to be differentiable.
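The DC viewpoint can also be sketched directly: linearize the concave term at the current iterate and solve the resulting convex bound-constrained subproblem. In the illustrative sketch below (our own), the subproblem is handed to L-BFGS-B for simplicity, a stand-in for the paper's subproblem solvers:

```python
import numpy as np
from scipy.optimize import minimize

def dca_sparse_nnls(A, b, beta=0.1, outer=20):
    """DC sketch for min_{x >= 0} 0.5*||Ax-b||^2 + beta*(||x||_1 - ||x||_2):
    linearize -beta*||x||_2 at x^k, then solve the convex subproblem
    with L-BFGS-B (an assumption; not the paper's solver)."""
    n = A.shape[1]
    x = np.full(n, 1.0 / n)
    for _ in range(outer):
        s = x / max(np.linalg.norm(x), 1e-12)  # gradient of ||.||_2 at x^k
        def f(z):
            r = A @ z - b
            # On z >= 0, ||z||_1 = sum(z), so the subproblem is smooth.
            return 0.5 * r @ r + beta * (z.sum() - s @ z)
        def g(z):
            return A.T @ (A @ z - b) + beta * (1.0 - s)
        x = minimize(f, x, jac=g, method="L-BFGS-B",
                     bounds=[(0.0, None)] * n).x
    return x

rng = np.random.default_rng(2)
A = np.abs(rng.standard_normal((12, 6)))
A /= np.linalg.norm(A, axis=0)
b = A @ np.array([1.0, 0.0, 0.0, 0.6, 0.0, 0.0])
x_hat = dca_sparse_nnls(A, b)
print(np.round(x_hat, 3))
```

Each outer step decreases the nonconvex objective, since the linearization majorizes the concave part.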
With many connections to classical algorithms, existing convergence results can be applied to argue that limit points of the iterates of Algorithms 1 and 2 are stationary points of (10). We still choose to include a convergence analysis for clarity, because our assumptions allow us to give a simple and intuitive argument. The following analysis is for Algorithm 1 under Assumption 1. However, if we replace $C_k$ with $C$ and the estimate of Proposition 3.1 with (17), then it applies equally well to Algorithm 2 under Assumption 2. We proceed by showing that the sequence $\{x^k\}$ is bounded, that $\|x^{k+1} - x^k\| \to 0$, and that limit points of $\{x^k\}$ are stationary points of (10) satisfying the necessary local optimality condition (4).
Lemma 3.2.
The sequence of iterates $\{x^k\}$ generated by Algorithm 1 is bounded.
Proof.
Since $F(x^k)$ is nonincreasing, every iterate lies in the sublevel set $\{x \in X : F(x) \le F(x^0)\}$, which is a bounded set by assumption. ∎
Lemma 3.3.
Let $\{x^k\}$ be the sequence of iterates generated by Algorithm 1. Then $\lim_{k \to \infty} \|x^{k+1} - x^k\| = 0$.
Proof.

Since $F(x^k)$ is bounded below and nonincreasing, it converges. By construction, $x^{k+1}$ satisfies the sufficient decrease condition enforced by the line search. By the optimality condition for (16),

$\langle \nabla F(x^k) + (A^T A + C_k)(x^{k+1} - x^k), \; x - x^{k+1} \rangle \ge 0 \quad \text{for all } x \in X.$

In particular, we can take $x = x^k$, which implies

$\langle \nabla F(x^k), x^{k+1} - x^k \rangle \le -(x^{k+1} - x^k)^T (A^T A + C_k)(x^{k+1} - x^k).$

Thus the decrease in $F$ at each step controls the quadratic form $(x^{k+1} - x^k)^T (A^T A + C_k)(x^{k+1} - x^k)$. Since the eigenvalues of $A^T A + C_k$ are bounded below by a constant $\theta > 0$, we have that

$F(x^k) - F(x^{k+1}) \ge c\,\theta\, \|x^{k+1} - x^k\|^2$

for some fixed $c > 0$. The result follows from noting that

$\sum_k \|x^{k+1} - x^k\|^2 \le \frac{1}{c\,\theta} \sum_k \left( F(x^k) - F(x^{k+1}) \right),$

which is finite since $F(x^k)$ converges. ∎
Proposition 3.4.

Limit points of the sequence $\{x^k\}$ generated by Algorithm 1 are stationary points of (10) satisfying the necessary optimality condition (4).
Proof.
Let $\bar{x}$ be a limit point of $\{x^k\}$. Since $\{x^k\}$ is bounded, such a point exists. Let $\{x^{k_j}\}$ be a subsequence that converges to $\bar{x}$. Since $\|x^{k+1} - x^k\| \to 0$, we also have that $x^{k_j + 1} \to \bar{x}$. Recalling the optimality condition for (16),

$\langle \nabla F(x^{k_j}) + (A^T A + C_{k_j})(x^{k_j + 1} - x^{k_j}), \; x - x^{k_j + 1} \rangle \ge 0 \quad \text{for all } x \in X.$

Following [37], proceed by taking the limit along the subsequence as $j \to \infty$. The term involving $(A^T A + C_{k_j})(x^{k_j + 1} - x^{k_j})$ vanishes since $\|x^{k_j + 1} - x^{k_j}\| \to 0$ and the matrices $A^T A + C_{k_j}$ are bounded. By continuity of $\nabla F$ we get that

$\langle \nabla F(\bar{x}), x - \bar{x} \rangle \ge 0 \quad \text{for all } x \in X.$

∎
Each iteration requires minimizing a strongly convex quadratic function over the set $X$, as defined in (16). Many methods can be used to solve this, and we want to choose one that is as robust as possible to poor conditioning of $A^T A + C$. For example, gradient projection works theoretically and even converges at a linear rate, but it can still be impractically slow. A better choice here is the alternating direction method of multipliers (ADMM) [41, 42], which alternately solves a linear system involving $A^T A + C$ and projects onto the constraint set. Applied to Problem 2, this is essentially the same as the application of split Bregman [43] to solve a NNLS model for hyperspectral demixing in [44]. We consider separately the application of ADMM to Problems 1 and 2. The application to Problem 2 is simpler.
For Problem 2, (16) can be written as

$\min_{x \ge 0} \; \langle \nabla F(x^k), x - x^k \rangle + \frac{1}{2}(x - x^k)^T (A^T A + C)(x - x^k).$

To apply ADMM, we can first reformulate the problem as

(19)   $\min_{x, z} \; Q(x) + g(z) \quad \text{such that } x = z,$

where $Q$ denotes the quadratic objective above and $g$ is an indicator function for the nonnegativity constraint, defined by $g(z) = 0$ if $z \ge 0$ and $g(z) = \infty$ otherwise. Introduce a Lagrange multiplier $p$ and define a Lagrangian

(20)   $L(x, z, p) = Q(x) + g(z) + \langle p, x - z \rangle$

and augmented Lagrangian

$L_\rho(x, z, p) = L(x, z, p) + \frac{\rho}{2}\|x - z\|^2,$

where $\rho > 0$. ADMM finds a saddle point of $L$ by alternately minimizing $L_\rho$ with respect to $x$, minimizing $L_\rho$ with respect to $z$ and updating the dual variable $p$. Having found a saddle point of $L$, $z$ will be a solution to (19) and we can take it to be the solution to (16). The explicit ADMM iterations are described in the following algorithm.
Algorithm 3:
ADMM for solving convex subproblem for Problem 2
Define $z^0$, $p^0$ and $x^0$ arbitrarily and let $k = 0$.
while not converged
k = k + 1
end while
Here $\Pi_{\ge 0}$ denotes the orthogonal projection onto the nonnegative orthant. For this application of ADMM to be practical, solving the linear system in the first update should not be too expensive, and the penalty parameter of the augmented Lagrangian should be well chosen.
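The core of the ADMM subproblem solver can be sketched for the generic strongly convex quadratic program $\min_{x \ge 0} \frac{1}{2} x^T P x + q^T x$, which is the form taken by (16); variable names and the fixed iteration count are our own choices:

```python
import numpy as np

def admm_nonneg_qp(P, q, rho=1.0, iters=200):
    """ADMM sketch for min_x 0.5*x'Px + q'x subject to x >= 0
    (the form of the subproblem (16)), splitting x = z with z >= 0."""
    n = P.shape[0]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                           # scaled dual variable
    M = np.linalg.inv(P + rho * np.eye(n))    # factor once, reuse
    for _ in range(iters):
        x = M @ (rho * (z - u) - q)           # unconstrained quadratic step
        z = np.maximum(x + u, 0.0)            # projection onto the orthant
        u = u + x - z                         # dual update
    return z

# Tiny check: min 0.5*||x||^2 + [-1, 1].x over x >= 0 has solution [1, 0].
print(np.round(admm_nonneg_qp(np.eye(2), np.array([-1.0, 1.0])), 3))
```

In practice the inverse would be replaced by a cached Cholesky factorization, and the iteration stopped by primal and dual residual tests rather than a fixed count.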
Since (16) is a standard quadratic program, a huge variety of other methods could also be applied. Variants of Newton’s method on a bound constrained KKT system might work well here, especially if we find we need to solve the subproblem to very high accuracy.
For Problem 1, (16) can be written as a quadratic minimization over both the coefficients and the dummy variables. Here, $\nabla_x$ and $\nabla_w$ represent the gradients with respect to $x$ and $w$ respectively. The matrix $C$ is assumed to be block diagonal, with the block acting on $w$ a diagonal matrix. It is helpful to represent the constraints in terms of convex sets and their indicator functions.
By adding splitting variables for $x$ and $w$, we can reformulate the problem as
Adding Lagrange multipliers for the linear constraints, we can define the augmented Lagrangian