Norddeutsche Automobilfabrik

Norddeutsche Automobilfabrik AG was a German automobile manufacturer, based in Hamburg, that existed only in 1923.

It produced a small car under the name Nafa, which was marketed as "Das neue Kleinauto" ("the new small car").

AAA | ABC | Adler | AGA | Alan | Alfi (1922–1925) | Alfi (1927–1928) | AMBAG | Amor | Anker | Apollo | Argeo | Arimofa | Atlantic | Audi | Auto-Ell | Badenia | Baer | BAW | BEB | Beckmann | Benz | Bergmann | Bergo | BF | Biene | Bleichert | BMW | Bob | Borcharding | Borgward | Bravo | Brennabor | Bufag | Bully | Butz | BZ | C. Benz Söhne | Certus | Club | Cockerell | Combi | Cyklon | Davidl | Dehn | DEW | Diabolo | Diana | Dinos | Dixi | DKW | Dorner | Dürkopp | Dux | D-Wagen | EBS | Ego | Ehrhardt | Ehrhardt-Szawe | Eibach | Electra | Elektric | Elite | Elitewagen | Eos | Erco | Espenlaub | Eubu | Exor | Fadag | Fafag | Fafnir | Falcon | Fama | Faun | Ferbedo | Ford | Fox | Framo | Freia | Fulmina | Garbaty | Gasi | Goliath | Görke | Grade | Gridi | Gries | Habag | HAG | HAG-Gastell | Hagea-Moto | Hanomag | Hansa | Hansa-Lloyd | Hascho | Hataz | Hawa | Heim | Helios | Helo | Hercules | Hero | Hildebrand | Hiller | Horch | HT | Imperia | Induhag | Ipe | Joswin | Juhö | Kaha | Kaiser | Keitel | Kenter | Kico | Kieling | Knöllner | Kobold | Koco | Komet | Komnick | Körting | Kühn | Landgrebe | Lauer | Leichtauto | Leifa | Lesshaft | Ley | Libelle | Lindcar | Lipsia | Loeb | Luther & Heyer | LuWe | Luwo | Lux | Macu | MAF | Magnet | Maier | Maja | Mannesmann | Martinette | Maurer | Mauser | Maybach | Mayrette | Mercedes | Mercedes-Benz | MFB | Mikromobil | Minimus | Möckwagen | Mölkamp | Moll | Monos | Mops | Morgan | Motobil | Motrix | Muvo | Nafa | NAG | NAG-Presto | NAG-Protos | Nawa | Neander | Neiman | Nemalette | Nowa | NSU | NSU-Fiat | Nufmobil | Nug | Omega | Omikron | Omnobil | Onnasch | Opel | Otto | Pawi | Pe-Ka | Peer Gynt | Pelikan | Peter & Moritz | Pfeil | Phänomen | Pilot | Pluto | Presto | Priamus | Protos | Rabag | Remag | Renfert | Rex-Simplex | Rhemag | Rikas | Rivo | Röhr | Roland | Rollfix | Rumpler | Rüttger | RWN | Sablatnig-Beuchelt | Sauer | SB | Schebera | Schönnagel | Schuricht | Schütte-Lanz | Seidel-Arop | Selve | SHW | Simson | Slaby-Beringer | Slevogt | Solomobil | Sperber | Sphinx | Spinell | Staiger | Standard | Steiger | Stoewer | Stolle | Sun | Szawe | Tamag | Tamm | Tatra | Teco | Tempo | Theis | Tornax | Tourist | Traeger | Trinks | Trippel | Triumph | Turbo | Utilitas | VL | Voran | VW | Walmobil | Wanderer | Wegmann | Weise | Wesnigk | Westfalia | Winkler | Wittekind | York | Zetgelette | Zündapp | Zwerg

Matrix completion

Matrix completion is the task of filling in the missing entries of a partially observed matrix. A wide range of datasets are naturally organized in matrix form. One example is the movie-ratings matrix, as appears in the Netflix problem: given a ratings matrix in which each entry $(i,j)$ represents the rating of movie $j$ by customer $i$ if customer $i$ has watched movie $j$, and is otherwise missing, we would like to predict the remaining entries in order to make good recommendations to customers on what to watch next. Another example is the term-document matrix: the frequencies of words used in a collection of documents can be represented as a matrix, where each entry corresponds to the number of times the associated term appears in the indicated document.

Without any restrictions on the number of degrees of freedom in the completed matrix, this problem is underdetermined, since the hidden entries could be assigned arbitrary values. Thus matrix completion often seeks to find the lowest-rank matrix or, if the rank of the completed matrix is known, a matrix of rank $r$ that matches the known entries. For instance, a partially revealed rank-1 matrix can be completed with zero error, since all of its rows are multiples of one another and the missing entries can be read off from the observed ones (a minimal numerical sketch of this is given below). In the case of the Netflix problem the ratings matrix is expected to be low-rank, since user preferences can often be described by a few factors, such as the movie genre and time of release. Other applications include computer vision, where missing pixels in images need to be reconstructed, detecting the global positioning of sensors in a network from partial distance information, and multiclass learning. The matrix completion problem is in general NP-hard, but there are tractable algorithms that achieve exact reconstruction with high probability.
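The following minimal NumPy sketch (an illustration written for this article, not taken from its sources) shows why low rank makes completion possible in the extreme rank-1 case: one fully observed row and one fully observed column determine every missing entry, assuming the anchor entry is nonzero.

import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(1, 2, size=4)          # column factor
v = rng.uniform(1, 2, size=5)          # row factor
M = np.outer(u, v)                     # rank-1 "true" matrix

# Observe only the first row and the first column of M.
row0, col0 = M[0, :], M[:, 0]

# Since M[i, j] = u[i] * v[j], we have M[i, j] = M[i, 0] * M[0, j] / M[0, 0].
M_hat = np.outer(col0, row0) / M[0, 0]

print(np.allclose(M_hat, M))           # True: zero-error completion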

From the statistical learning point of view, the matrix completion problem is an application of matrix regularization, which is a generalization of vector regularization. For example, in the low-rank matrix completion problem one may apply a regularization penalty taking the form of a nuclear norm, $R(X)=\lambda\|X\|_{*}$.


One of the variants of the matrix completion problem is to find the lowest-rank matrix $X$ which matches the matrix $M$, which we wish to recover, on all entries in the set $E$ of observed entries. The mathematical formulation of this problem is as follows:

$$\begin{aligned}
&\min_{X} && \operatorname{rank}(X)\\
&\text{subject to} && X_{ij}=M_{ij} \quad \forall\, i,j\in E
\end{aligned}$$


Candes and Recht proved that with assumptions on the sampling of the observed entries and sufficiently many sampled entries this problem has a unique solution with high probability.

An equivalent formulation, given that the matrix $M$ to be recovered is known to be of rank $r$, is to solve for a rank-$r$ matrix $X$ such that

$$X_{ij}=M_{ij} \quad \forall\, i,j\in E.$$


A number of assumptions on the sampling of the observed entries and the number of sampled entries are frequently made to simplify the analysis and to ensure the problem is not underdetermined.

To make the analysis tractable, it is often assumed that the set $E$ of observed entries, of fixed cardinality, is sampled uniformly at random from the collection of all subsets of entries of cardinality $|E|$. To further simplify the analysis, it is instead assumed that $E$ is constructed by Bernoulli sampling, i.e. that each entry is observed independently with probability $p$. If $p$ is set to $\frac{N}{mn}$, where $N$ is the desired expected cardinality of $E$ and $m,\,n$ are the dimensions of the matrix (let $m<n$ without loss of generality), then $|E|$ is within $O(n\log n)$ of $N$ with high probability; thus Bernoulli sampling is a good approximation for uniform sampling. Another simplification is to assume that entries are sampled independently and with replacement.
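As a concrete illustration of the Bernoulli model, the short NumPy sketch below (a toy written for this article, not from the sources) draws a mask in which each entry is observed with probability p = N/(mn) and checks that the realized cardinality |E| lands near the target N.

import numpy as np

rng = np.random.default_rng(1)
m, n, N = 200, 300, 6000
p = N / (m * n)                        # per-entry observation probability

mask = rng.random((m, n)) < p          # True marks an observed entry
print(mask.sum())                      # realized |E|, concentrated around N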

Suppose the $m$ by $n$ matrix $M$ (with $m<n$) that we are trying to recover has rank $r$. There is an information-theoretic lower bound on how many entries must be observed before $M$ can be uniquely reconstructed. Firstly, the number of degrees of freedom of a matrix of rank $r$ is $2nr-r^{2}$. This can be shown by looking at the singular value decomposition of the matrix and counting the degrees of freedom: for an $n\times n$ matrix, each of the two orthonormal singular-vector factors contributes $nr-\tfrac{r(r+1)}{2}$ free parameters and the singular values contribute $r$, for a total of $2nr-r^{2}$. Then at least $2nr-r^{2}$ entries must be observed for matrix completion to have a unique solution.

Secondly, there must be at least one observed entry per row and column of $M$. The singular value decomposition of $M$ is given by $U\Sigma V^{\dagger}$. If row $i$ is unobserved, the $i$-th row of $U$ can be changed to arbitrary values and still yield a matrix matching $M$ over the set of observed entries. Similarly, if column $j$ is unobserved, the $j$-th row of $V$ can be arbitrary. If we assume Bernoulli sampling of the set of observed entries, the coupon collector effect implies that on the order of $O(n\log n)$ entries must be observed to ensure that there is an observation from each row and column with high probability.

Combining the necessary conditions and assuming that $r\ll m,n$ (a valid assumption for many practical applications), the lower bound on the number of observed entries required to prevent the problem of matrix completion from being underdetermined is on the order of $nr\log n$.

The concept of incoherence arose in compressed sensing. It is introduced in the context of matrix completion to ensure that the singular vectors of $M$ are not too "sparse", in the sense that all coordinates of each singular vector are of comparable magnitude, instead of just a few coordinates having significantly larger magnitudes. The standard basis vectors are then undesirable as singular vectors, and the vector $\frac{1}{\sqrt{n}}\begin{bmatrix}1\\1\\\vdots\\1\end{bmatrix}$ in $\mathbb{R}^{n}$ is desirable. As an example of what could go wrong if the singular vectors are sufficiently "sparse", consider the $m$ by $n$ matrix $\begin{bmatrix}1&0&\cdots&0\\\vdots&&&\vdots\\0&0&\cdots&0\end{bmatrix}$, with singular value decomposition $I_{m}\begin{bmatrix}1&0&\cdots&0\\\vdots&&&\vdots\\0&0&\cdots&0\end{bmatrix}I_{n}$. Almost all the entries of $M$ must be sampled before it can be reconstructed.

Candes and Recht define the coherence of a matrix $U$ whose column space is an $r$-dimensional subspace of $\mathbb{R}^{n}$ as $\mu(U)=\frac{n}{r}\max_{1\le i\le n}\|P_{U}e_{i}\|^{2}$, where $P_{U}$ is the orthogonal projection onto $U$. Incoherence then asserts that, for the singular value decomposition $U\Sigma V^{\dagger}$ of the $m$ by $n$ matrix $M$, $\mu(U)$ and $\mu(V)$ are bounded by $\mu_{0}$ and the entries of $UV^{\dagger}$ are bounded in magnitude by $\mu_{1}\sqrt{r/(mn)}$, for some $\mu_{0},\;\mu_{1}$.
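The coherence $\mu(U)$ is straightforward to compute numerically. The sketch below is a small illustrative helper (the function name and test matrices are invented for this example): it orthonormalizes the columns of a factor and evaluates (n/r) max_i ||P_U e_i||^2, showing that a random subspace is incoherent while a span of standard basis vectors attains the maximal value n/r.

import numpy as np

def coherence(U):
    n, r = U.shape
    Q, _ = np.linalg.qr(U)             # orthonormal basis of the column space
    leverage = (Q ** 2).sum(axis=1)    # ||P_U e_i||^2 = squared norm of row i of Q
    return (n / r) * leverage.max()

rng = np.random.default_rng(2)
print(coherence(rng.standard_normal((1000, 5))))  # small: incoherent subspace
print(coherence(np.eye(1000)[:, :5]))             # n / r = 200: maximally coherent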

In real-world applications, one often observes only a few entries, corrupted at least by a small amount of noise. For example, in the Netflix problem, the ratings are uncertain. Candes and Plan showed that it is possible to fill in the many missing entries of large low-rank matrices from just a few noisy samples by nuclear norm minimization. The noisy model assumes that we observe

$$Y_{ij}=M_{ij}+Z_{ij},\quad (i,j)\in\Omega,$$

where $\{Z_{ij}:(i,j)\in\Omega\}$ is a noise term. Note that the noise can be either stochastic or deterministic. Alternatively the model can be expressed as

$$P_{\Omega}(Y)=P_{\Omega}(M)+P_{\Omega}(Z),$$

where $Z$ is an $n\times n$ matrix with entries $Z_{ij}$ for $(i,j)\in\Omega$, assuming that $\|P_{\Omega}(Z)\|_{F}\leq\delta$ for some $\delta>0$. One then solves

$$\begin{aligned}
&\min_{X} && \|X\|_{*}\\
&\text{subject to} && \|P_{\Omega}(X-Y)\|_{F}\leq\delta
\end{aligned}$$


Among all matrices consistent with the data, this finds the one with minimum nuclear norm. Candes and Plan have shown that this reconstruction is accurate: they proved that when perfect noiseless recovery occurs, matrix completion is stable with respect to perturbations. The error is proportional to the noise level $\delta$; therefore, when the noise level is small, the error is small. Here the matrix completion problem does not obey the restricted isometry property (RIP). For matrices, the RIP would assume that the sampling operator obeys

$$(1-\delta)\|X\|_{F}^{2}\leq\frac{1}{p}\|P_{\Omega}(X)\|_{F}^{2}\leq(1+\delta)\|X\|_{F}^{2}$$

for all matrices $X$ with sufficiently small rank and $\delta<1$ sufficiently small. The methods are also applicable to sparse signal recovery problems in which the RIP does not hold.
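For concreteness, the noisy nuclear-norm program above can be written down almost verbatim with a convex-optimization modeling layer. The sketch below assumes the CVXPY package (a choice made for this example; the references do not prescribe a solver) and recovers a small synthetic low-rank matrix from noisy partial observations.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 30, 30, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # low-rank truth
mask = (rng.random((m, n)) < 0.5).astype(float)                 # Omega as a 0/1 matrix
Y = M + 0.01 * rng.standard_normal((m, n))                      # noisy observations
delta = 0.01 * np.sqrt(mask.sum())                              # noise budget

X = cp.Variable((m, n))
problem = cp.Problem(
    cp.Minimize(cp.normNuc(X)),                                 # ||X||_*
    [cp.norm(cp.multiply(mask, X - Y), "fro") <= delta],        # ||P_Omega(X - Y)||_F <= delta
)
problem.solve()
print(np.linalg.norm(X.value - M) / np.linalg.norm(M))          # small relative error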

High-rank matrix completion is in general NP-hard. However, under certain assumptions, some incomplete high-rank, or even full-rank, matrices can be completed.

Eriksson, Balzano and Nowak have considered the problem of completing a matrix with the assumption that the columns of the matrix belong to a union of multiple low-rank subspaces. Since the columns belong to a union of subspaces, the problem may be viewed as a missing-data version of the subspace clustering problem. Let $X$ be an $n\times N$ matrix whose (complete) columns lie in a union of at most $k$ subspaces, each of rank $\leq r<n$, and assume $N\gg kn$. Eriksson, Balzano and Nowak showed that under mild assumptions each column of $X$ can be perfectly recovered with high probability from an incomplete version so long as at least $CrN\log^{2}(n)$ entries of $X$ are observed uniformly at random, with $C>1$.










A standard convex relaxation of the rank minimization problem minimizes the nuclear norm $\|M\|_{*}$ (which gives the sum of the singular values of $M$) instead of $\operatorname{rank}(M)$ (which counts the number of non-zero singular values of $M$). This is analogous to minimizing the L1-norm rather than the L0-norm for vectors. The convex relaxation can be solved using semidefinite programming (SDP) by noticing that the optimization problem is equivalent to

$$\begin{aligned}
&\min_{W_{1},W_{2}} && \operatorname{trace}(W_{1})+\operatorname{trace}(W_{2})\\
&\text{subject to} && X_{ij}=M_{ij}\quad\forall\, i,j\in E\\
&&& \begin{bmatrix}W_{1}&X\\X^{\dagger}&W_{2}\end{bmatrix}\succeq 0
\end{aligned}$$


The complexity of using SDP to solve the convex relaxation is $O(\max(m,n)^{4})$. State-of-the-art solvers such as SDPT3 can only handle matrices of size up to about 100 by 100. An alternative first-order method that approximately solves the convex relaxation is the Singular Value Thresholding algorithm introduced by Cai, Candes and Shen.
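A minimal NumPy rendition of singular value thresholding is sketched below. The threshold tau, the step size, and the iteration count are illustrative defaults, not the tuned parameters of Cai, Candes and Shen; the loop alternates a singular-value shrinkage step with a re-fitting step on the observed-entry residual.

import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.2, iters=500):
    # Approximately solve: min ||X||_* s.t. X matches M_obs on the 0/1 mask.
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt    # shrink singular values by tau
        Y += step * mask * (M_obs - X)             # re-fit the observed entries
    return X

# Usage sketch: X_hat = svt_complete(mask * M, mask), with mask a 0/1 array.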

Candes and Recht show, using the study of random variables on Banach spaces, that if the number of observed entries is on the order of $\max\{\mu_{1}^{2},\sqrt{\mu_{0}}\,\mu_{1},\mu_{0}n^{0.25}\}\,nr\log n$ (assume without loss of generality $m<n$), the rank minimization problem has a unique solution, which also happens to be the solution of its convex relaxation, with probability $1-\frac{c}{n^{3}}$ for some constant $c$. If the rank of $M$ is small ($r\leq\frac{n^{0.2}}{\mu_{0}}$), the size of the set of observations reduces to the order of $\mu_{0}n^{1.2}r\log n$. These results are near optimal, since the minimum number of entries that must be observed for the matrix completion problem to not be underdetermined is on the order of $nr\log n$.

This result has been improved by Candes and Tao. They achieve bounds that differ from the optimal bounds only by polylogarithmic factors by strengthening the assumptions. Instead of the incoherence property, they assume the strong incoherence property with parameter $\mu_{3}$. Intuitively, strong incoherence of a matrix $U$ asserts that the orthogonal projections of the standard basis vectors onto $U$ have magnitudes comparable to what one would expect if the singular vectors were distributed uniformly at random.

Candes and Tao find that when $r$ is $O(1)$ and the number of observed entries is on the order of $\mu_{3}^{4}n(\log n)^{2}$, the rank minimization problem has a unique solution, which also happens to be the solution of its convex relaxation, with probability $1-\frac{c}{n^{3}}$ for some constant $c$. For arbitrary $r$, the number of observed entries sufficient for this assertion to hold true is on the order of $\mu_{3}^{2}nr(\log n)^{6}$.


Keshavan, Montanari and Oh consider a variant of matrix completion where the rank of the $m$ by $n$ matrix $M$, which is to be recovered, is known to be $r$. They assume Bernoulli sampling of entries, constant aspect ratio $\frac{m}{n}$, bounded magnitude of entries of $M$ (let the upper bound be $M_{\text{max}}$), and constant condition number $\frac{\sigma_{1}}{\sigma_{r}}$ (where $\sigma_{1}$ and $\sigma_{r}$ are the largest and smallest singular values of $M$ respectively). Further, they assume the two incoherence conditions are satisfied with $\mu_{0}$ and $\mu_{1}\frac{\sigma_{1}}{\sigma_{r}}$, where $\mu_{0}$ and $\mu_{1}$ are constants. Let $M^{E}$ be a matrix that matches $M$ on the set $E$ of observed entries and is 0 elsewhere. They then propose the following algorithm: first, trim $M^{E}$ by zeroing out the rows and columns with a disproportionately large number of observed entries; second, project the trimmed matrix onto its first $r$ principal components, calling the result $\text{Tr}(M^{E})$; third, refine this estimate by minimizing the residual on the observed entries over rank-$r$ factorizations.

Steps 1 and 2 of the algorithm yield a matrix $\text{Tr}(M^{E})$ very close to the true matrix $M$, as measured by the root mean square error (RMSE), with high probability. In particular, with probability $1-\frac{1}{n^{3}}$,

$$\frac{1}{mnM_{\text{max}}^{2}}\|M-\text{Tr}(M^{E})\|_{F}^{2}\leq C\,\frac{r}{m|E|}\sqrt{\frac{m}{n}}$$

for some constant $C$.



















Here $\|\cdot\|_{F}$ denotes the Frobenius norm. Note that the full suite of assumptions is not needed for this result to hold. The incoherence condition, for example, only comes into play in exact reconstruction. Finally, although trimming may seem counterintuitive as it involves throwing out information, it ensures that projecting $M^{E}$ onto its first $r$ principal components gives more information about the underlying matrix $M$ than about the observed entries.
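The trimming-plus-projection estimator Tr(M^E) is easy to state in NumPy. In the rough sketch below, the trimming thresholds (twice the average number of observations per row or column) and the 1/p rescaling are illustrative choices, a simplification of, not the exact recipe from, Keshavan, Montanari and Oh.

import numpy as np

def trim_and_project(M_E, mask, r):
    m, n = mask.shape
    E = mask.sum()                                # number of observed entries
    trimmed = M_E.copy()
    trimmed[mask.sum(axis=1) > 2 * E / m, :] = 0  # drop over-represented rows
    trimmed[:, mask.sum(axis=0) > 2 * E / n] = 0  # drop over-represented columns
    U, s, Vt = np.linalg.svd(trimmed, full_matrices=False)
    p = E / (m * n)                               # empirical sampling rate
    return (U[:, :r] * s[:r]) @ Vt[:r, :] / p     # rank-r projection, rescaled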

In Step 3, the space of candidate matrices $X,\,Y$ can be reduced by noticing that the inner minimization problem has the same solution for $(X,Y)$ as for $(XQ,YR)$, where $Q$ and $R$ are orthonormal $r$ by $r$ matrices. Then gradient descent can be performed over the cross product of two Grassmann manifolds. If $r\ll m,\,n$ and the observed entry set is on the order of $nr\log n$, the matrix returned by Step 3 is exactly $M$. Then the algorithm is order optimal, since we know that for the matrix completion problem to not be underdetermined the number of entries must be on the order of $nr\log n$.

Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and it formed a major component of the winning entry in the Netflix problem. In the alternating minimization approach, the low-rank target matrix is written in a bilinear form, $X=UV^{\dagger}$; the algorithm then alternates between finding the best $U$ and the best $V$. While the overall problem is non-convex, each sub-problem is typically convex and can be solved efficiently. Jain, Netrapalli and Sanghavi have given one of the first guarantees for performance of alternating minimization for both matrix completion and matrix sensing.

The alternating minimization algorithm can be viewed as an approximate way to solve the following non-convex problem:

$$\min_{U,V\in\mathbb{R}^{n\times k}} \|P_{\Omega}(UV^{\dagger})-P_{\Omega}(M)\|_{F}^{2}$$


The AltMinComplete algorithm proposed by Jain, Netrapalli and Sanghavi combines a spectral initialization with alternating least-squares updates of the two factors; a simplified sketch of the alternating loop follows.
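The sketch below is a bare-bones alternating least-squares loop for the bilinear objective above, written for this article; the spectral initialization is kept, but the sample-splitting and clipping steps of the actual AltMinComplete analysis are omitted for brevity. It assumes every row and column has at least one observed entry.

import numpy as np

def altmin_complete(M_obs, mask, k, iters=50):
    # mask is a boolean array marking the observed entries of M_obs.
    m, n = M_obs.shape
    U = np.linalg.svd(M_obs, full_matrices=False)[0][:, :k]  # spectral init
    V = np.zeros((n, k))
    for _ in range(iters):
        for j in range(n):                        # fix U, solve for each row of V
            rows = mask[:, j]
            V[j] = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)[0]
        for i in range(m):                        # fix V, solve for each row of U
            cols = mask[i, :]
            U[i] = np.linalg.lstsq(V[cols], M_obs[i, cols], rcond=None)[0]
    return U @ V.T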

They showed that by observing $|\Omega|=O\!\left(\left(\frac{\sigma_{1}^{*}}{\sigma_{k}^{*}}\right)^{6}k^{7}\log n\log(k\|M\|_{F}/\epsilon)\right)$ random entries of an incoherent matrix $M$, the AltMinComplete algorithm can recover $M$ in $O(\log(1/\epsilon))$ steps. In terms of sample complexity ($|\Omega|$), alternating minimization may theoretically require a bigger $\Omega$ than convex relaxation; empirically, however, this seems not to be the case, which suggests that the sample complexity bounds can be further tightened. In terms of time complexity, they showed that AltMinComplete needs time $O(|\Omega|k^{2}\log(1/\epsilon))$.

It is worth noting that, although convex-relaxation-based methods have a rigorous analysis, alternating-minimization-based algorithms are more successful in practice.

Several applications of matrix completion are summarized by Candes and Plan as follows:

Collaborative filtering is the task of making automatic predictions about the interests of a user by collecting taste information from many users. Companies like Apple, Amazon, Barnes and Noble, and Netflix try to predict user preferences from partial knowledge. In this kind of matrix completion problem, the unknown full matrix is often considered low-rank, because only a few factors typically contribute to an individual's tastes or preferences.

In control, one would like to fit a discrete-time linear time-invariant state-space model

$$\begin{aligned}
x(t+1)&=Ax(t)+Bu(t)\\
y(t)&=Cx(t)+Du(t)
\end{aligned}$$

to a sequence of inputs $u(t)\in\mathbb{R}^{m}$ and outputs $y(t)\in\mathbb{R}^{p}$, $t=0,\ldots,N$. The vector $x(t)\in\mathbb{R}^{n}$ is the state of the system at time $t$, and $n$ is the order of the system model. From the input/output pairs, one would like to recover the matrices $A,B,C,D$ and the initial state $x(0)$. This problem can also be viewed as a low-rank matrix completion problem.

The global positioning problem emerges naturally in sensor networks. The problem is to recover the global positioning of points in Euclidean space from a local or partial set of pairwise distances. Thus it is a matrix completion problem with rank two if the sensors are located in a 2-D plane and three if they are in a 3-D space.

Steve Heinze

Stephen Herbert Heinze (born January 30, 1970) is a former National Hockey League right winger. He was drafted in the third round, 60th overall, by the Boston Bruins in the 1988 NHL Entry Draft. Heinze was born in Lawrence, Massachusetts, but grew up in North Andover, Massachusetts.

Heinze played three seasons for Boston College, where he, David Emma, and Marty McInnis formed the "HEM" Line. Heinze, Emma, and McInnis finished first, second, and third, respectively, in the 1989–90 Hockey East scoring race. Heinze played for the 1992 U.S. Olympic hockey team and signed a multiyear contract with the Boston Bruins on March 6, 1992, following the Olympic games. After nine seasons with the Bruins, he joined the Columbus Blue Jackets for the 2000–01 season. The Blue Jackets traded him to the Buffalo Sabres at that season's trade deadline. He then joined the Los Angeles Kings as a free agent before the 2001–02 season, and played the final two seasons of his career there.

Because of his last name, Heinze requested to wear #57 (as in Heinz 57 ketchup) with the Bruins. However, the Bruins denied his request, feeling that his surname-and-number combination would be viewed as an advertising gimmick for the condiment. Instead, Heinze wore #23 in Boston. He was granted #57 when he joined the Blue Jackets and he wore it for the remainder of his NHL career.

In his NHL career, Heinze appeared in 694 games. He scored 178 goals and added 158 assists. He also appeared in 69 NHL playoff games, scoring 11 goals and adding 15 assists.

Fachbodenregal (shelf rack)

A shelf rack (Fachbodenregal) is a rack in which goods are stored on so-called shelf boards (Fachböden). This oldest of all rack types thus takes its name from its load-carrying element, the shelf board. Shelf racks generally consist of mass-produced components such as rack uprights, load-bearing side walls, shelf boards, stiffening elements, and so on. Shelf racks are primarily used where picking is done manually, which results in a standard height of about two meters. Initially, this height could be raised to about three meters by using ladders. Where the room height permits, several rack units of about two meters each are stacked on top of one another to form multi-tier installations.

Thanks to modern technical equipment (storage and retrieval machines, order-picking machines, and the like), shelf racks today can also be more than 15 meters high. Operation can then still be carried out manually from the machine.

Shelf racks are among the most widespread storage systems for order-picking tasks. They are used in all areas of business, commerce, and administration, up to and including private households and basements. The preferred material for the rack components is steel.

Swirlies

Swirlies is an indie rock band formed in Boston in 1990. They have often been compared to My Bloody Valentine, and are sometimes referred to as shoegaze musicians.

Guitarists Seana Carmody and Damon Tutunjian met each other in Spring 1990 when they joined a Go-Go’s cover band formed in Cambridge, Massachusetts. Tutunjian, Carmody, and drummer Jason Fitzpatrick learned two songs before abandoning their original objective in favor of writing originals. Under the name Raspberry Bang, the group released one song on a 7″ compilation to benefit animal rights.

In November 1990, Tutunjian’s high school friend Andy Bernick was enlisted to play bass and MIT student Ben Drucker was recruited on drums. The band began writing and recording songs characterized by shifting tempos, loud vibrato guitars played through numerous effects pedals, Tutunjian and Carmody’s melodic vocal interplay, and occasional bursts of screaming and other noise. They completed their first 4-track demo in December 1990, and played their first show on January 25, 1991. Because of the band’s practice of alternate guitar tunings, Bernick took to playing tapes or static from an old AM radio to fill time while Carmody and Tutunjian adjusted their guitars. This lo-fi sound would later spill over between songs on the band’s recorded work.

In 1991 Swirlies made some 8-track home recordings, which saw issue as the band's first single "Didn't Understand," first self-released as a cassette and then as a 7″ record by Slumberland Records. A split double-single with Boston noise rock band Kudgel followed, and the group entered the studio for a single and compilation tracks for Boston's Pop Narcotic label.

In 1992 the band signed to Taang! Records and released the eight-song mini-album What To Do About Them, culled from a mix of previously available and unreleased home and studio recordings. Musician/cartoonist Ron Regé, Jr. contributed artwork to the album's cover as well as lo-fi recordings that were woven into the record's sequence. The band also set to work recording their first LP, around which time shifts in Swirlies' personnel began to occur. Ben Drucker only drummed on a third of the new album's studio tracks, and for the remaining songs his parts were handed over to a pair of session drummers. Andy Bernick departed to pursue ornithology for the academic year and Damon's former roommate Morgan Andrews filled in on bass guitar and other noises. It was this line-up that toured to support the new album, Blonder Tongue Audio Baton, and appeared in the video for its lead track, "Bell". Named for an obscure piece of vintage musical equipment, Blonder Tongue Audio Baton made use of Mellotron, Moog, and other analogue artifacts that the group had unearthed in the studio. During sequencing the band threw in numerous lo-fi compositions, soundbites, and rants, and collaged together an album jacket from arrays of found images and objects that matched the album's eclectic aesthetics. Hailed for melding "the high waters of shoegaze creativity and the mounting currents of indie rock", Blonder Tongue Audio Baton quickly rose to prominence in the American noise pop canon.

After a year of birding, Bernick returned and Swirlies enjoyed a brief period of performing as a quintet made up of two guitars, two bass guitars and a drum kit until Andrews left in 1993 to become a radio DJ. Later that year Swirlies released the Brokedick Car EP that had remixes of songs from Blonder Tongue alongside a couple of outtakes. Ben Drucker was soon replaced by Anthony DeLuca on drums, and the band made their first trip to Europe in January, 1994 where they recorded for The John Peel Show. After the tour, Seana Carmody left Swirlies to lead the pop group Syrup USA.

Christina Files joined Swirlies on guitar, synths, and vocals in early 1994, and that summer the band began work on their next album. After being branded both "shoegaze" and "chimp rock" early in their career, the band was exploring new musical directions and invented a name for their ethos, emblazoning it in the title of their 1995 EP, Sneaky Flutes and Sneaky Flute Music. The seven-song mini-album was a forerunner to their full-length masterwork, 1996's They Spent Their Wild Youthful Days in the Glittering World of the Salons. The album added more synth-driven electronica and a few dance beats to their foundations in angular noise pop, drawing some reasonable comparisons to Stereolab and their Krautrock forebears.

More member changes occurred during this time: In 1995 DeLuca left and Gavin McCarthy manned the drum kit for two U.S. tours before moving on to work in his own group Karate. Swirlies briefly played as a trio before Adam Pierce stepped in to play drums. Files left Swirlies to play with Victory at Sea and was replaced in 1997 by guitarist Rob Laakso. In 1998 the group remixed songs from the Salons sessions for the album Strictly East Coast Sneaky Flute Music featuring collaborations with producer Rich Costey, DJ Spooky, Soul Slinger, Mice Parade, various friends of the band, plus an abundance of field recordings as had been tradition on other Swirlies releases.

Swirlies continued as a four-piece under the Damon-Rob-Andy-Adam arrangement into the new millennium. Swirlies offshoot The Yes Girls (the core lineup but with Lavender Diamond's Ron Regé on drums) toured the U.S. with Timonium, and in Denmark as an opener for Mew on their Half the World is Watching Me tour. The endeavor culminated in the release of the home-made album Damon Andy Rob Ron: The Yes Girls in 2000 on Sneaky Flute Empire/Pehr and a limited edition live album on Sneaky Flute Empire.

The band began to settle into being an outfit with a cast of guest musicians who revolved in and out of the group to accommodate other members' academic, career, and family commitments: Seana Carmody, Vanessa Downing, and Damon's sister Kara Tutunjian often joined onstage for live vocals, Mike Walker and Tarquin Katis occasionally sat in for Bernick on bass, as well as Ken Bernard and Kevin Shea for Adam Pierce on drums. Deborah Warfield joined the band as vocalist circa 2000, also playing guitar and keyboards live.

Swirlies eventually released Cats of the Wild, Vol. 2 on Bubblecore Records in 2003, collecting new material alongside revamped studio versions of Yes Girls songs, plus outtakes from Swirlies' early-90s recordings.

The group, though sometimes sparse in regards to output, has never disbanded. They played a few shows in the northeastern U.S. in 2009 and 2011, toured the Eastern Seaboard and Midwest in 2013 with Kurt Vile, with Elliott Malvas on bass in lieu of Andy, and in 2015 Swirlies co-founder Seana Carmody rejoined the group for a two-week tour of the eastern U.S. and Canada to commemorate the band's 25th year of existence. In July 2016, Joyful Noise Recordings released a new Swirlies song ("Fantastic Trumpets Forever") on flexi disc. Also in 2016, Taang! Records was planning to reissue the Blonder Tongue Audio Baton LP on vinyl.

Since their earliest demo tape, each Swirlies appearance has been marked with "This is Swirlies number __" in order of its release. In addition to their "official" releases, Swirlies have produced a number of cassingles, CDs, and free downloadable albums on their own Sneaky Flute Empire label, including a rock opera addressing harbored feelings of enmity towards Meg Zamula, a former writer for Pitchfork Media.

Andy Bernick maintains a label and website called Richmond County Archives that publishes material from the band's periphery. The site also serves as Swirlies' official web presence.

A Swirlies tribute album, Sneaky Flute Moods: A Tribute to the Swirlies, was released online by Reverse Engine in April 2012.

Singer/guitarist Damon Tutunjian produced Mew’s debut album and has performed in some capacity on all of their albums. In 2013 he joined Swedish band I Am Super Ape on bass guitar and synth and produced their latest single „Monki“, featuring Mark Lanegan on vocals.

Bernick and Tutunjian have been with the group for its entire duration while other members have come and gone, sometimes coming back again.

Swirlies are mentioned in the 2016 novel, Our Noise by Jeff Gomez, appearing in a fictitious record review written by the book’s protagonist.

Race condition (situation de compétition)

A race condition (in French situation de compétition, also situation de concurrence, accès concurrent, concurrence critique, course critique, or séquencement critique) is a defect in a system characterized by a result that differs depending on the order in which the actors of the system act.

A race condition can occur whenever several actors attempt to access the same shared resource (file, printer, etc.) at the same moment and at least one of them may modify its state. This definition implies that systems whose shared resources are immutable (whose state cannot change) are immune to this problem.

Race conditions are particularly difficult problems to identify and correct, since they occur only following a particular, hard-to-reproduce ordering of a sequence of events.

A race condition can occur in multitasking software when data is shared without precautions between several tasks. Consider the example of an industrial system that counts the production of a machine. Each time a part is manufactured, the following routine is executed:

total_pièces = total_pièces + 1

This routine increments the total number of parts manufactured. But if the software can somehow react to the concurrent production of two parts, a race condition arises. Indeed, although it sits on a single line of code, the instruction that modifies the total number of parts is not atomic: it decomposes into elementary load, increment and store instructions:

LOAD accumulator ← total_pièces
INCREMENT accumulator
STORE accumulator → total_pièces

Thus, when two parts are manufactured at roughly the same time, the sequence of instructions may be the following:

LOAD accumulator ← total_pièces

INCREMENT accumulator

LOAD accumulator ← total_pièces

INCREMENT accumulator

STORE accumulator → total_pièces

STORE accumulator → total_pièces

Stepping through the instructions under these conditions (even assuming the two accumulators are distinct), one sees that in the end the total number of parts has increased by only one, even though two parts were manufactured. This problem only occurs if the two routines are executed with very precise timing, which shows that a race condition can remain hidden for a long time before it manifests.

This class of defect exists only in multitasking systems, but the definition must also include single-task systems that can receive external stimuli ("interrupts") in an unpredictable way. A race condition can have harmful effects over a long period, and the system may need to be reset.

To eliminate race conditions, one must ensure that the operations to be performed in sequence are atomic or otherwise protected by mutual exclusion.

Mutual exclusion is a method for avoiding race conditions: it ensures that when one task attempts to access a shared resource, the other tasks are blocked, waiting for the resource.

If tasks are prevented from accessing the same shared resource at the same time, that is, from entering their critical sections at the same moment, race conditions are avoided.

Different types of mutual exclusion exist; a minimal lock-based sketch follows.
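The following Python sketch (written for this article) reproduces the parts counter above with two threads and protects the increment with a lock, so the load-increment-store sequence executes as one critical section.

import threading

total_pieces = 0
lock = threading.Lock()

def produce(n):
    global total_pieces
    for _ in range(n):
        with lock:                     # mutual exclusion around the critical section
            total_pieces += 1          # load, increment, store: now effectively atomic

threads = [threading.Thread(target=produce, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total_pieces)                    # 200000: no lost updates with the lock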

In a computer program that requires authentication before performing an action, a window of time elapses between the success of the authentication and the action itself. It is in this window that a race can occur; for example, an attacker can exploit it to divert the action planned by the software to their own ends.

A critical race can be used to take control of a system at the precise moment when a fleeting breach is open, that is, at the moment when it is vulnerable.

Nordkapplatået

Nordkapplatået (Northern Sami: Davvenjárga) is a cliff on the island of Magerøya in Finnmark county, Norway, in Nordkapp municipality. The 307 m high, steep cliff is often described as the northernmost point of Europe. However, Knivskjelodden lies about 1,380 m further north, and is thus the actual northernmost point of continental Europe.

Both of these places lie on an island, which means that the northernmost point of mainland Europe is Kinnarodden. Magerøya gained a fixed link to the mainland on 15 June 1999.

In Norwegian the cliff was originally called Knyskanes, but it was given the name Nordkapp (North Cape) by the English explorer Richard Chancellor in 1553, when he passed the cliff while searching for the Northeast Passage. After that it was sporadically visited by daredevils who climbed the steep terrain, among them King Oscar II in 1873 and King Rama V of Thailand in 1907.

Today Nordkapp is a major tourist destination, with its own visitor centre with exhibitions and a film about the site's history. Nordkapp is probably the Norwegian place name best known outside Norway's borders. The plateau is visited by more than 200,000 tourists annually.

In 2011 the Ministry of the Environment demanded that the admission price for the plateau be reduced from the level of 160–235 kroner for an adult ticket.

Midnight sun at Nordkapp

Nordkapp with Knivskjelodden in the background

Knivskjellodden seen from Nordkapplatået

Valdivienne

Valdivienne is a commune in western France with 2,715 inhabitants (as of 1 January 2013) in the Vienne department in the Poitou-Charentes region. It belongs to the arrondissement of Montmorillon and to the canton of Chauvigny. Its inhabitants are called Valdiviennois.

Valdivienne lies about 25 kilometers southeast of Poitiers on the river Vienne. Valdivienne is surrounded by the neighboring communes of Chauvigny to the north and east, Chapelle-Viviers to the southeast, Civaux to the south, Lhommaize to the southwest, Fleuré and Tercé to the west, and Pouillé to the northwest.

The commune was formed in 1969 through the merger of the communes of Morthemer, Salles-en-Toulon and Saint-Martin-la-Rivière.

Donjon of the castle of Morthemer

Church of Saint-Hilaire

Adriers | Anché | Angles-sur-l’Anglin | Antigny | Asnières-sur-Blour | Asnois | Availles-Limouzine | Blanzay | Bouresse | Bourg-Archambault | Brigueil-le-Chantre | Brion | Brux | Béthines | Ceaux-en-Couhé | Champagné-Saint-Hilaire | Champagné-le-Sec | Champniers | Chapelle-Viviers | Charroux | Chatain | Chaunay | Chauvigny | Château-Garnier | Châtillon | Civaux | Civray | Couhé | Coulonges | Fleix | Genouillé | Gençay | Gouex | Haims | Jouhet | Journet | Joussé | L’Isle-Jourdain | La Bussière | La Chapelle-Bâton | La Ferrière-Airoux | La Trimouille | Lathus-Saint-Rémy | Lauthiers | Le Vigeant | Leignes-sur-Fontaine | Lhommaizé | Liglet | Linazay | Lizant | Luchapt | Lussac-les-Châteaux | Magné | Mauprévoir | Mazerolles | Millac | Montmorillon | Moulismes | Moussac | Mouterre-sur-Blourde | Nalliers | Nérignac | Paizay-le-Sec | Payroux | Payré | Persac | Pindray | Plaisance | Pressac | Queaux | Romagne | Saint-Gaudent | Saint-Germain | Saint-Laurent-de-Jourdes | Saint-Léomer | Saint-Macoux | Saint-Martin-l’Ars | Saint-Maurice-la-Clouère | Saint-Pierre-d’Exideuil | Saint-Pierre-de-Maillé | Saint-Romain | Saint-Savin | Saint-Saviol | Saint-Secondin | Sainte-Radégonde | Saulgé | Savigné | Sillars | Sommières-du-Clain | Surin | Thollet | Usson-du-Poitou | Valdivienne | Vaux | Verrières | Villemort | Voulon | Voulême

Woodford (California)

Woodford is an unincorporated community in Kern County, California (United States), at an elevation of 837 m.

Actis • Aerial Acres • Alameda • Algoso • Alta Sierra • Annette • Ansel • Armistead • Baker • Bannister • Bealville • Bena • Bissell • Blackwells Corner • Bowerbank • Bradys • Brown • Burton Mill • Cable • Calders Corner • Calico • Caliente • Cameron • Camp Owens • Canebrake • Cantil • Cawelo • Ceneda • Chaffee • Cherokee Strip • China Lake • Cinco • Claraville • Conner • Crome • Desert Lake • Di Giorgio • Dow • East Bakersfield • Edison • Edmundson Acres • Edwards • El Rita • Elmo • Famoso • Fig Orchard • Five Points • Fleta • Fruitvale • Fuller Acres • Garlock • Glennville • Goler Heights • Gosford • Grapevine • Greenacres • Greenfield • Gulf • Gypsite • Halfway House • Harpertown • Harts Place • Havilah • Hazelton • Hights Corner • Hollis • Ilmon • Indian Wells • Jasmin • Jastro • Kayandee • Kecks Corner • Kern • Kern Lake • Kernell • Keyesville • Kilowatt • Lackey Place • Lakeview • Landco • Lerdo • Lokern • Lonsmith • Loraine • Magunden • Maltha • Marcel • Mayfair • Meridian • Mexican Colony • Midoil • Millersville • Millux • Minter Village • Miracle Hot Springs • Missouri Triangle • Mitchells Corner • Monolith • Moreland Mill • Myricks Corner • Neufeld • North Belridge • North Shafter • Oil City • Oil Junction • Old Garlock • Old River • Old Town • Palmo • Panama • Patch • Pentland • Pettit Place • Pinon Pines Estates • Pond • Prospero • Pumpkin Center • Quality • Rancho Seco • Rand • Reward • Ribier • Ricardo • Rich • Rio Bravo • Riverkern • Rowen • Saco • Sageland • Saltdale • San Emidio • Sanborn • Sand Canyon • Searles • Seguro • Semitropic • Shirley Meadows • Silt • Slater • Smith Corner • South Lake • Spellacy • Spicer City • Stevens • Summit • Thomas Lane • Twin Lakes • Twin Oaks • Una • Venola • Vinland • Walker Basin • Wallace Center • Walong • Wheeler Ridge • Wible Orchard • Willow Springs • Woodford • Woody • Zentner

Kourtney Kardashian

Kourtney Mary Kardashian (born April 18, 1979, in Los Angeles) is an American television personality and socialite. She became known through the reality series Keeping Up with the Kardashians, the linchpin of the Kardashian family's commercial empire. Like both of her sisters, she skillfully exploits her celebrity status.

Kourtney is a daughter of Robert Kardashian, a lawyer of Armenian descent. She is also the older sister of Kim and Khloé, and has a brother. Her mother later remarried athlete Bruce Jenner, a marriage from which Kourtney has two half-sisters, Kendall Jenner and Kylie Jenner. The reality series Keeping Up with the Kardashians shows the glamorous life of these related families. Kourtney took part in the spin-off series Kourtney and Khloe Take Miami and Kourtney and Kim Take New York.

From 2006 she was in a relationship with model and reality TV star Scott Disick, with whom she has three children: Mason Dash Disick, Penelope Scotland Disick and Reign Aston Disick. The couple separated in early July 2015.