# One-Pass Incomplete Multi-View Clustering

The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)

Menglei Hu, Songcan Chen
College of Computer Science & Technology, Nanjing University of Aeronautics & Astronautics
Collaborative Innovation Center of Novel Software Technology and Industrialization
{ml.hu, s.chen}@nuaa.edu.cn

## Abstract

Real data often come with multiple modalities or from multiple heterogeneous sources, forming so-called multi-view data, which has received increasing attention in machine learning. Multi-view clustering (MVC) has become an important paradigm for such data. In real-world applications, some views often suffer from missing instances. Clustering on such multi-view datasets is called incomplete multi-view clustering (IMC) and is quite challenging. Although many approaches have been developed to date, most of them are offline and have high computational and memory costs, especially on large-scale datasets. To address this problem, we propose a One-Pass Incomplete Multi-view Clustering framework (OPIMC). With the help of regularized matrix factorization and weighted matrix factorization, OPIMC handles this problem relatively easily. Unlike the sole existing online IMC method, OPIMC obtains clustering results directly and effectively determines the termination of the iteration process by introducing two global statistics. Extensive experiments conducted on four real datasets demonstrate the efficiency and effectiveness of the proposed OPIMC method.

## Introduction

With the spread of diverse data acquisition devices, real data often come with multiple modalities or from multiple heterogeneous sources (Blum and Mitchell 1998), forming so-called multi-view data (Son et al. 2017). For example, a web document can be represented by its URL and by the words on the page, and images of a 3D object can be taken from different viewpoints (Sun 2013). In multi-view datasets, the consistency and complementary information among different views need to be exploited for the learning task at hand, such as classification or clustering (Zhao, Ding, and Fu 2017). Multi-view learning has been widely studied in areas such as machine learning, data mining and artificial intelligence (Xing et al. 2017; Tulsiani et al. 2017; Nie et al. 2018).

Multi-view clustering (MVC), one of the most important tasks in multi-view learning, has attracted considerable attention because it avoids the expensive requirement of data labeling (Bickel and Scheffer 2004; Fan et al. 2017). The goal of MVC is to make full use of both the consistency and the complementary information among multi-view data to obtain better clustering results. To date, a variety of methods have been proposed, and they can roughly be divided into two main categories: subspace approaches (Ding and Fu 2014; Cao et al. 2015; Li 2016) and spectral approaches (Kumar and Daumé 2011; Tao et al. 2017; Ren et al. 2018). The former learn a shared latent subspace in which views of different dimensionality become comparable, whereas the latter learn a unified similarity matrix among multi-view data by extending single-view spectral clustering approaches. A common assumption of most of the above methods is that all views are complete, meaning that every instance appears in every view and the instances correspond to each other across views.
However, in real-world applications, some views often suffer from missing instances, so that instances in one view do not necessarily have counterparts in the others. Such incompleteness makes MVC considerably more difficult. Clustering on such incomplete multi-view datasets is called incomplete multi-view clustering (IMC). Many approaches have been developed for it (Li, Jiang, and Zhou 2014; Shao, He, and Philip 2015; Zhao, Liu, and Fu 2016; Liu et al. 2017; Wen et al. 2018; Hu and Chen 2018). Nevertheless, almost all of these approaches are offline and can hardly handle large-scale datasets because of their high time and space complexities.

In the age of data explosion, the data of individual views is often huge. For example, hundreds of hours of video are uploaded to YouTube every minute, and each video comes in multiple modalities or views, namely audio, text and visual views. As another example, in Web-scale data mining one may encounter billions of Web pages whose feature dimension can be as large as $O(10^6)$. Data at such scale is hard to store in memory and to process in an offline way. To the best of our knowledge, only one method, OMVC, has been proposed for the large-scale IMC problem (Shao et al. 2016). However, OMVC still suffers from problems in normalizing the data matrix, handling missing instances, determining convergence, and so on. Therefore, solving the large-scale IMC problem remains urgent.

In this paper, we propose a One-Pass Incomplete Multi-view Clustering framework (OPIMC) for large-scale multi-view datasets based on subspace learning. OPIMC addresses the IMC problem with the help of Regularized Matrix Factorization (RMF) (Gunasekar et al. 2017; Qi et al. 2017) and Weighted Matrix Factorization (WMF) (Kim and Choi 2009). Furthermore, OPIMC obtains clustering results directly and effectively determines the termination of the iteration by introducing two global statistics, which yields a prominent reduction in clustering time. In the following, we first give a brief review of related work. We then detail the OPIMC approach and its optimization, report the experimental results, and finally conclude the paper.

## Related Work

**Multi-view Clustering.** As mentioned in the introduction, a variety of multi-view clustering methods have been proposed, and they can roughly be divided into two categories: subspace approaches (Li 2016) and spectral approaches (Ren et al. 2018). In contrast to the spectral approaches, the subspace approaches have become a main paradigm due to their lower time and space complexities; they learn a latent subspace in which views of different dimensionality are close to each other. Among the subspace approaches, nonnegative matrix factorization (NMF) (Lee and Seung 1999) has become a dominant technique because it can conveniently be applied to clustering, and many NMF-based methods and variants have since been proposed. For example, (Liu et al. 2013) establishes a joint NMF model for multi-view clustering, which performs NMF on each view and pushes the low-dimensional representation of each view towards a common consensus. Manifold learning has also been considered for multi-view clustering: by imposing manifold regularization on the NMF objective for the data of individual views (Wang, Yang, and Li 2016; Zong et al. 2017), these methods obtain relatively better results.
These are just a few examples; for more related work on MVC, please refer to (Chao, Sun, and Bi 2017; Sun 2013).

**Incomplete Multi-view Clustering.** Most previous studies on multi-view clustering assume that all instances are present in all views. However, this assumption does not always hold in real-world applications. For example, in a camera network a camera may temporarily fail or be blocked by some object, so the corresponding instances go missing, which makes the multi-view data incomplete. Recently, some incomplete multi-view clustering methods have been proposed. For instance, (Li, Jiang, and Zhou 2014) proposes PVC, which uses instance alignment information to establish a latent subspace where the instances corresponding to the same object in different views are close to each other and similar instances within the same view are well grouped. (Shao, He, and Philip 2015) proposes MIC, a method for clustering more than two incomplete views: it first fills the missing instances of each incomplete view with the average feature values, and then handles the problem with the help of weighted NMF and $\ell_{2,1}$-norm regularization (Kong, Ding, and Huang 2011; Wu et al. 2018). Moreover, (Hu and Chen 2018) proposes DAIMC, which extends PVC to the multi-view case by utilizing instance missing information and aligning the clustering centers among different views simultaneously.

**Online Incomplete Multi-view Clustering.** In the age of data explosion, multi-view data tends to be large scale. However, the above approaches for incomplete multi-view data are almost all offline and can hardly handle large-scale datasets due to their high time and space complexities. Online learning, an efficient strategy for building large-scale learning systems, has attracted much attention in the past years (Nguyen et al. 2015; Wan, Wei, and Zhang 2018). As a special case of online learning, one-pass learning (OPL) (Zhu, Ting, and Zhou 2017) requires only a single pass over the data and is particularly useful and efficient for streaming data. To the best of our knowledge, only one method, OMVC (Shao et al. 2016), combines online learning with incomplete multi-view clustering by extending MIC to the online case. Nevertheless, OMVC still suffers from problems in the following aspects:

1. **Normalization of the dataset:** OMVC normalizes the multi-view datasets by summing all elements of the data, which is unreasonable in online learning.
2. **Imputation of missing instances:** Due to the mechanism of online learning, it is difficult to obtain the average feature values of each incomplete view to fill the missing instances.
3. **Efficiency:** OMVC learns a consensus latent feature matrix across all views and then applies K-means to this matrix to obtain the clustering results, which incurs a high computational cost when both the number of instances and the number of categories are large.
4. **Termination determination for iterative convergence:** OMVC terminates the iteration process using all scanned instances, which is not only unreasonable but also time-consuming and laborious.

Considering these disadvantages of OMVC, we propose a more general and feasible incomplete multi-view clustering algorithm, which deals with large-scale incomplete multi-view data efficiently and effectively.

## Proposed Approach

### Preliminaries

Given an input data matrix $X \in \mathbb{R}^{M \times N}$, each column of $X$ is an instance.
Regularized Matrix Factorization (RMF) aims to approximately factorize the data matrix $X$ into two matrices $U$ and $V$ under a Frobenius-norm regularization constraint on $U$ and $V$, which gives the minimization problem

$$\min_{U,V} \;\|X - UV^T\|_F^2 + \alpha \|U\|_F^2 + \alpha \|V\|_F^2 \tag{1}$$

where the low-rank factor matrices $U \in \mathbb{R}^{M \times K}$ and $V \in \mathbb{R}^{N \times K}$, $K$ denotes the dimension of the subspace, and $\alpha$ is a nonnegative parameter. This problem is biconvex, so a locally optimal solution is easily found by the following updating rules.

Update $U$ (with $V$ fixed):
$$U = XV(V^TV + \alpha I_K)^{-1} \tag{2}$$

Update $V$ (with $U$ fixed):
$$V = X^TU(U^TU + \alpha I_K)^{-1} \tag{3}$$

Weighted Matrix Factorization (WMF), one of the most commonly used methods for matrices with missing entries, is widely used in recommender systems (Xue et al. 2017). The WMF optimization problem is formulated as

$$\min_{U,V} \;\|(X - UV^T) \odot W\|_F^2 \tag{4}$$

where $W$ contains entries only in $\{0, 1\}$, and $W_{ij} = 0$ when the entry $X_{ij}$ is missing.

### One-Pass Incomplete Multi-view Clustering

Given a set of incomplete multi-view data matrices $\{X^{(v)} \in \mathbb{R}^{d_v \times N},\, v = 1, 2, \ldots, n_v\}$, where $d_v$ and $N$ denote the dimensionality and the number of instances respectively, we fill the missing instances of individual views with 0 for convenience of description. We introduce an indicator matrix $M \in \mathbb{R}^{n_v \times N}$ for this incomplete multi-view dataset:

$$M_{vj} = \begin{cases} 1 & \text{if the } j\text{-th instance is present in the } v\text{-th view} \\ 0 & \text{otherwise} \end{cases} \tag{5}$$

where each row of $M$ records the presence or absence of the instances in the corresponding view. From $M$, we can easily obtain the missing information of individual views and the alignment information across views.

For the $v$-th view, inspired by Regularized Matrix Factorization, we factorize the data matrix $X^{(v)} \in \mathbb{R}^{d_v \times N}$ into two matrices $U^{(v)} \in \mathbb{R}^{d_v \times K}$ and $V^{(v)} \in \mathbb{R}^{N \times K}$, where $K$ denotes the dimension of the subspace and equals the number of categories of the dataset. Furthermore, to avoid the third problem of OMVC, we impose a 1-of-$K$ coding constraint on $V^{(v)}$, which implies $\|V^{(v)}\|_F^2 = N$. We thus obtain the following model:

$$\min_{U^{(v)},V^{(v)}} \;\|X^{(v)} - U^{(v)}V^{(v)T}\|_F^2 + \alpha \|U^{(v)}\|_F^2 \quad \text{s.t. } V^{(v)}_{ik} \in \{0,1\},\; \sum_{k=1}^{K} V^{(v)}_{ik} = 1,\; i = 1, \ldots, N \tag{6}$$

For a multi-view dataset, (6) does not consider the consistency information across different views. To address this, we assume that the views have distinct matrices $\{U^{(v)}\}_{v=1}^{n_v}$ but share the same matrix $V$. Meanwhile, we exploit the instance missing information to handle the incompleteness of each view with the help of Weighted Matrix Factorization. Thus, (6) is rewritten as

$$\min_{\{U^{(v)}\},V} \sum_{v=1}^{n_v} \left\{ \|(X^{(v)} - U^{(v)}V^T)W^{(v)}\|_F^2 + \alpha \|U^{(v)}\|_F^2 \right\} \quad \text{s.t. } V_{ik} \in \{0,1\},\; \sum_{k=1}^{K} V_{ik} = 1,\; i = 1, \ldots, N \tag{7}$$

where the diagonal weight matrix $W^{(v)} \in \mathbb{R}^{N \times N}$ is defined as

$$W^{(v)}_{jj} = \begin{cases} 1 & \text{if the } j\text{-th instance is present in the } v\text{-th view} \\ 0 & \text{otherwise} \end{cases} \tag{8}$$

In real-world applications, the data matrices may be too large to fit into memory. We therefore solve the above optimization problem in an online fashion with low computational and storage complexity. We assume that the data of each view arrives in chunks of size $s$, so the objective function decomposes as

$$\min_{\{U^{(v)}\},\{V_t\}} \sum_{v=1}^{n_v} \left\{ \sum_{t} \|(X^{(v)}_t - U^{(v)}V_t^T)W^{(v)}_t\|_F^2 + \alpha \|U^{(v)}\|_F^2 \right\} \quad \text{s.t. } V_{ik} \in \{0,1\},\; \sum_{k=1}^{K} V_{ik} = 1,\; i = 1, \ldots, N \tag{9}$$

where $X^{(v)}_t$ is the $t$-th data chunk of the $v$-th view, $V_t$ is the clustering indicator matrix for the $t$-th chunk, and $W^{(v)}_t$ is the diagonal weight matrix for the $t$-th chunk.

### Optimization

From (9) we can see that the problem is biconvex in $\{U^{(v)}\}$ and $V_t$ at each time $t$, so we update $\{U^{(v)}\}$ and $V_t$ in an alternating way.
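As a building block, the plain RMF alternation of Eqs. (2)-(3) can be written in a few lines of NumPy. This is a minimal sketch under our own naming (the function name `rmf`, the defaults, and the random initialization are ours, not from the paper):

```python
import numpy as np

def rmf(X, K, alpha=1.0, n_iter=50, seed=0):
    """Alternate the closed-form ridge solutions of Eqs. (2)-(3) for
    min ||X - U V^T||_F^2 + alpha (||U||_F^2 + ||V||_F^2)   (Eq. (1))."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    V = rng.standard_normal((N, K))
    I_K = np.eye(K)
    for _ in range(n_iter):
        U = X @ V @ np.linalg.inv(V.T @ V + alpha * I_K)    # Eq. (2)
        V = X.T @ U @ np.linalg.inv(U.T @ U + alpha * I_K)  # Eq. (3)
    return U, V
```

The per-chunk OPIMC updates below follow the same closed-form ridge pattern, with $V$ replaced by the constrained cluster indicator.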
We first describe the normalization of the dataset.

**Normalization:** In multi-view data there are scaling differences among views. To reduce these differences and improve the clustering results, appropriate normalization is necessary. However, due to the mechanism of online learning, it is difficult to normalize the dataset using global information such as the mean and variance. Instead, we map every instance onto the unit hypersphere, i.e. $\|X^{(v)}(:, j)\| = 1$.

Next, we describe the subproblems of the OPIMC optimization problem.

**Subproblem of $\{U^{(v)}\}_{v=1}^{n_v}$.** With $V_t$ fixed, the partial derivative of $\mathcal{J}$ with respect to each $U^{(v)}$ is

$$\frac{\partial \mathcal{J}}{\partial U^{(v)}} = \sum_{i=1}^{t} 2(U^{(v)}V_i^T - X^{(v)}_i)W^{(v)}_i W^{(v)T}_i V_i + 2\alpha U^{(v)} \tag{10}$$

From the definition of $W^{(v)}$ we have $W^{(v)}_i = W^{(v)}_i W^{(v)T}_i$. Meanwhile, thanks to the zero filling of the dataset, setting $\partial \mathcal{J} / \partial U^{(v)} = 0$ yields the updating rule

$$U^{(v)} = \left( \sum_{i=1}^{t} X^{(v)}_i V_i \right) \left( \sum_{i=1}^{t} V_i^T W^{(v)}_i V_i + \alpha I_K \right)^{-1} \tag{11}$$

For the sake of convenience, we introduce the two statistics

$$R^{(v)}_t = \sum_{i=1}^{t} X^{(v)}_i V_i, \qquad T^{(v)}_t = \sum_{i=1}^{t} V_i^T W^{(v)}_i V_i \tag{12}$$

so that (11) can be rewritten as

$$U^{(v)} = R^{(v)}_t (T^{(v)}_t + \alpha I_K)^{-1} \tag{13}$$

When a new chunk arrives, the matrices $R^{(v)}_t$ and $T^{(v)}_t$ are updated easily as

$$R^{(v)}_t = R^{(v)}_{t-1} + X^{(v)}_t V_t, \qquad T^{(v)}_t = T^{(v)}_{t-1} + V_t^T W^{(v)}_t V_t \tag{14}$$

**Subproblem of $V_t$.** With $\{U^{(v)}\}_{v=1}^{n_v}$ fixed and inspired by K-means, we introduce a matrix $D \in \mathbb{R}^{s \times K}$ recording the distances between all instances (the columns of $X^{(v)}_t$) and all clustering centers (the columns of $U^{(v)}$) over all views:

$$D_{ij} = \sum_{v=1}^{n_v} \|(X^{(v)}_{t,i} - U^{(v)}_j)W^{(v)}_{t,ii}\|_F^2 \tag{15}$$

where $X^{(v)}_{t,i}$ denotes the $i$-th instance of $X^{(v)}_t$ and $W^{(v)}_{t,ii}$ denotes the entry $(i, i)$ of $W^{(v)}_t$. The index of the minimum in each row of $D$ gives the clustering indicator of the corresponding instance, so $V_t$ is updated by the following two Matlab instructions:

```matlab
[~, index] = min(D, [], 2);
Vt = full(sparse(1:s, index, 1, s, K, s));
```
(16)

The above procedure solves the first three problems of OMVC (Shao et al. 2016). In the following we present the solution to OMVC's fourth problem.

**Termination determination for iterative convergence:** Unfolding the objective function (9) gives

$$\begin{aligned} \mathcal{J} &= \sum_{v=1}^{n_v} \left\{ \sum_{t} \|(X^{(v)}_t - U^{(v)}V_t^T)W^{(v)}_t\|_F^2 + \alpha \|U^{(v)}\|_F^2 \right\} \\ &= \sum_{v=1}^{n_v} \left\{ \sum_{t} \left[ \operatorname{tr}(X^{(v)T}_t X^{(v)}_t) - 2\operatorname{tr}(U^{(v)T} X^{(v)}_t V_t) + \operatorname{tr}(V_t^T W^{(v)}_t V_t U^{(v)T} U^{(v)}) \right] + \alpha \|U^{(v)}\|_F^2 \right\} \\ &= N n_v (1 - \text{ratio}) + \sum_{v=1}^{n_v} \left\{ -2\operatorname{tr}(U^{(v)T} R^{(v)}_N) + \operatorname{tr}(U^{(v)T} U^{(v)} T^{(v)}_N) + \alpha \|U^{(v)}\|_F^2 \right\} \end{aligned} \tag{17}$$

where ratio denotes the incomplete rate of the dataset and $\operatorname{tr}$ denotes the matrix trace. From (17), by recording the statistics $R$ and $T$, we can easily obtain the loss over all scanned instances, and the memory required for this operation is very small, i.e. $O(d_v s)$.

It is worth noting that for the first chunk, because of the random initialization of $U$ and $V$ and the small size of the chunk, some clustering centers are likely to degenerate when updating $U$. To prevent this, during the iterative update of the first chunk we fill the degenerate clustering centers with the chunk average values, while for the other chunks we fill them with their last values. The experimental results verify the effectiveness of this operation.

The entire optimization procedure for OPIMC is summarized in Algorithm 1.
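Before giving the formal listing, the per-chunk inner loop of Eqs. (13)-(16) can be sketched in NumPy as follows. This is a simplified illustration, not the authors' implementation: the helper name `opimc_chunk` and its list-based bookkeeping are our assumptions, and the degenerate-center filling described above is omitted for brevity.

```python
import numpy as np

def opimc_chunk(Xt, wt, R, T, U, alpha, n_inner=20):
    """Process one data chunk (hypothetical helper, simplified sketch).
    Xt: per-view (d_v, s) chunks with missing instances zero-filled;
    wt: per-view length-s 0/1 presence vectors (diagonal of W_t^{(v)});
    R, T: global statistics of Eq. (12); U: per-view center matrices."""
    nv, s, K = len(Xt), Xt[0].shape[1], U[0].shape[1]
    I_K = np.eye(K)
    for _ in range(n_inner):
        # Eq. (15): D[i, j] accumulates the weighted squared distance of
        # instance i to center j over all views.
        D = np.zeros((s, K))
        for v in range(nv):
            diff = Xt[v][:, :, None] - U[v][:, None, :]   # (d_v, s, K)
            D += wt[v][:, None] * (diff ** 2).sum(axis=0)
        # Eq. (16): 1-of-K indicator from the row-wise minima of D.
        Vt = np.zeros((s, K))
        Vt[np.arange(s), D.argmin(axis=1)] = 1.0
        # Eq. (13): refresh the centers using provisional statistics.
        for v in range(nv):
            Rv = R[v] + Xt[v] @ Vt                        # X_t^{(v)} V_t
            Tv = T[v] + Vt.T @ (wt[v][:, None] * Vt)      # V_t^T W_t^{(v)} V_t
            U[v] = Rv @ np.linalg.inv(Tv + alpha * I_K)
    for v in range(nv):                                   # Eq. (14): commit
        R[v] += Xt[v] @ Vt
        T[v] += Vt.T @ (wt[v][:, None] * Vt)
    return Vt
```

Because missing instances are zero-filled, $X^{(v)}_t W^{(v)}_t V_t = X^{(v)}_t V_t$, which is why the sketch never multiplies $X_t$ by the weights, matching Eq. (11).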
```text
Algorithm 1: One-Pass Incomplete Multi-view Clustering
Require: data matrices {X^(v)} for the incomplete views, weight matrices
         {W^(v)}, parameter α, number of clusters K
 1: R^(v)_0 = 0, T^(v)_0 = 0 for each view v
 2: for t = 1 : ceil(N/s) do
 3:   draw {X^(v)_t} for all views
 4:   if t = 1 then
 5:     initialize {U^(v)} and V_t with random values
 6:   else
 7:     initialize V_t according to Eqs. (15)-(16)
 8:   end if
 9:   repeat
10:     for v = 1 : n_v do
11:       update U^(v) according to Eqs. (11)-(13)
12:     end for
13:     fill the degenerate clustering centers
14:     update V_t according to Eqs. (15)-(16)
15:   until convergence
16:   update R^(v)_t and T^(v)_t according to Eq. (14)
17: end for
18: get the clustering results according to V
19: return {U^(v)} and the clustering results
```

### Convergence

The convergence of OPIMC is established by the following theorem.

**Theorem 1** The objective function value of Eq. (9) is non-increasing under the optimization procedure in Algorithm 1.

**Proof:** As shown in Algorithm 1, the optimization of OPIMC can be divided into two subproblems, each of which is convex with respect to one variable. Thus, by alternately finding the optimal solution of each subproblem, the algorithm at least finds a locally optimal solution.

### Complexity

**Time complexity:** The computational complexity of OPIMC is dominated by matrix multiplications and inversions. We discuss it in two parts: optimizing $U^{(v)}$ and optimizing $\{V_t\}$. Assuming $K \ll d_v, s, N$, the time complexities of updating $U^{(v)}$ and $\{V_t\}$ are both $O(d_v K s)$. Let $L$ be the number of inner iterations and $d_{\max}$ the largest dimensionality over all views; accounting for the $\lceil N/s \rceil$ chunks, the overall computational complexity is $O(L n_v d_{\max} K N)$. It is worth noting that in our experiments OPIMC converges quickly, so setting $L = 20$ is enough.

**Space complexity:** The proposed OPIMC algorithm requires only $O(n_v d_{\max} s)$ memory ($s \ll N$). By recording the two global statistics $R$ and $T$, OPIMC can easily update $U$ and $V$ and determine convergence over the scanned instances.
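A one-pass driver corresponding to Algorithm 1 might look like the sketch below. Again, the naming is ours: it reuses the hypothetical `opimc_chunk` helper sketched earlier, assumes the columns of each view have already been normalized to unit norm as described, and omits the first-chunk safeguards (random $V_t$ initialization and degenerate-center filling) for brevity.

```python
import numpy as np

def opimc(views, masks, K, alpha=0.1, s=50, n_inner=20, seed=0):
    """One pass of OPIMC over chunked data (simplified sketch).
    views: list of (d_v, N) arrays with missing instances zero-filled;
    masks: (n_v, N) 0/1 indicator matrix M of Eq. (5)."""
    rng = np.random.default_rng(seed)
    nv, N = len(views), views[0].shape[1]
    R = [np.zeros((X.shape[0], K)) for X in views]
    T = [np.zeros((K, K)) for _ in views]
    U = [rng.standard_normal((X.shape[0], K)) for X in views]
    labels = np.empty(N, dtype=int)
    for start in range(0, N, s):                   # for t = 1 : ceil(N/s)
        idx = slice(start, min(start + s, N))
        Xt = [X[:, idx] for X in views]            # draw the t-th chunk
        wt = [masks[v, idx].astype(float) for v in range(nv)]
        Vt = opimc_chunk(Xt, wt, R, T, U, alpha, n_inner)
        labels[idx] = Vt.argmax(axis=1)            # 1-of-K -> cluster ids
    return U, labels
```

Note that, as in Algorithm 1, the clustering result is read directly off the indicator matrices; no extra K-means step on a latent representation is needed.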
## Experiments

### Data Sets

We conduct experiments on four real-world multi-view datasets, two small and two large, where Reuters and Youtube are currently among the largest benchmark datasets used for multi-view clustering experiments. Their key statistics are given in Table 1.

Table 1: Statistics of the datasets

| Dataset | Instances | Views (dimensionality) | Clusters |
|---|---|---|---|
| WebKB | 1,051 | Content (3,000), Anchor text (1,840) | 2 |
| Digit | 2,000 | Fourier (76), Profile (216), Karhunen-Loève (64), Pixel (240), Zernike (47) | 10 |
| Reuters | 111,740 | English (21,531), French (24,893), German (34,279), Spanish (15,506), Italian (11,547) | 6 |
| Youtube | 92,457 | Vision (512), Audio (2,000), Text (1,000) | 31 |

Dataset sources: WebKB: http://vikas.sindhwani.org/manifoldregularization.html; Digit: http://archive.ics.uci.edu/ml/datasets/Multiple+Features; Reuters: http://archive.ics.uci.edu/ml/machine-learning-databases/00259/; Youtube: https://archive.ics.uci.edu/ml/datasets/YouTube+Multiview+Video+Games+Dataset

### Compared Methods

We compare OPIMC with several state-of-the-art methods.

- **OPIMC:** the proposed one-pass incomplete multi-view clustering method. We search the parameter $\alpha$ in {1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3}.
- **IMC:** the offline case of OPIMC, as shown in (7).
- **OMVC:** an online incomplete multi-view clustering method proposed in (Shao et al. 2016). To facilitate comparison, we set the $\alpha_v$'s ($\beta_v$'s) to the same value for all views, selecting $\alpha$ within {1e-3, 1e-2, 1e-1, 1e0} and $\beta$ within {1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2}.
- **MultiNMF:** a classic offline multi-view clustering method proposed in (Liu et al. 2013). We select $\alpha$ within {1e-3, 1e-2, 1e-1, 1e0}.
- **ONMF:** an online document clustering algorithm for a single view using NMF (Wang et al. 2011). To apply ONMF, we simply concatenate all normalized views into one big single view. We compare two versions from the original paper: ONMFI, the original algorithm that computes the exact inverse of the Hessian matrix, and ONMFDA, which uses a diagonal approximation of the Hessian inverse.

To simulate the incomplete-view setting, we randomly remove some instances from each view. On the WebKB and Digit datasets we set the incomplete rate to 0.3 and 0.4 respectively; on the Reuters and Youtube datasets we set it to 0.4. We also shuffle the order of the samples to better fit a realistic online scenario. The chunk size $s$ for the online methods is set to 50 for the small datasets and 2,000 for the large ones. Since MultiNMF and ONMF can only deal with complete multi-view datasets, for completeness of the experiments we first fill the missing instances of each incomplete view with the average feature values.

The normalized mutual information (NMI) and precision (AC) are used as clustering evaluation measures. For the online and one-pass methods, in order to compare with OMVC and ONMF more comprehensively, we also run the experiments for 10 passes and report both NMI and AC for the different passes. The experimental results are shown in Figure 1.

*Figure 1: Performance of clustering on WebKB, Digit, Reuters and Youtube for different passes. Panels (a)-(l) report AC and NMI for each dataset and incomplete rate: (a)-(b) WebKB at 0.3, (c)-(d) Digit at 0.3, (e)-(f) WebKB at 0.4, (g)-(h) Digit at 0.4, (i)-(j) Reuters at 0.4, (k)-(l) Youtube at 0.4. Compared methods: IMC, OPIMC, OMVC, ONMFI, ONMFDA, MultiNMF.*

### Results

Figure 1 reports the clustering performance on the WebKB, Digit, Reuters and Youtube datasets for different passes and incomplete rates, from which we make the following observations.

From Figures 1(a) and 1(b), on the WebKB dataset the offline method IMC achieves the best performance; the proposed OPIMC comes close after just two passes and outperforms the other four comparison methods. The same phenomena can be observed in Figures 1(c), 1(d), 1(g) and 1(h) on the Digit dataset. From Figures 1(e) and 1(f), OPIMC performs poorly on WebKB in the first few passes at the incomplete rate of 0.4; the main reasons are the large incomplete rate and the small chunk size, which make the matrices $\{U^{(v)}\}$ hard to learn. However, after a few passes, through continuous correction by the global information, the clustering performance on WebKB grows rapidly. On the large-scale Reuters dataset, Figures 1(i) and 1(j) show that OPIMC obtains the best results after only one pass, although the clustering performance decreases as the number of passes increases. From Figures 1(k) and 1(l), on the Youtube dataset OPIMC produces excellent results, much better than the other methods. This fully demonstrates the effectiveness of OPIMC.
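For reference, both measures are available in standard toolkits. The paper abbreviates its second measure as AC; a common choice in this literature is clustering accuracy under the best cluster-to-label matching (Hungarian algorithm), which the sketch below assumes, so treat it as our assumption rather than the authors' exact computation. NMI comes directly from scikit-learn.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """AC under the best one-to-one cluster-to-label matching (Hungarian).
    Both label arrays are assumed to hold integers starting at 0."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    K = int(max(y_true.max(), y_pred.max())) + 1
    counts = np.zeros((K, K), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    rows, cols = linear_sum_assignment(-counts)  # maximize matched counts
    return counts[rows, cols].sum() / y_true.size

# Usage with the 1-of-K indicator V of Eq. (9) and ground-truth labels y:
# ac  = clustering_accuracy(y, V.argmax(axis=1))
# nmi = normalized_mutual_info_score(y, V.argmax(axis=1))
```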
**Complexity Study:** All experiments are run on a computer with an Intel(R) Core(TM) i5-3470 @ 3.20GHz CPU and 16.0 GB RAM, using Matlab R2013a. The results are reported in Table 2, from which we make several observations. Firstly, OMVC obtains better results than ONMFI and ONMFDA, but the latter two run faster than OMVC. Secondly, the offline method IMC runs faster than all other methods except OPIMC. Thirdly, compared with OMVC, OPIMC takes much less running time (only 1%-2% of OMVC's) while obtaining relatively better clustering results. All these observations confirm the efficiency and effectiveness of our model.

Table 2: Run time for different methods

| Run Time (seconds) | WebKB | Digit | Reuters | Youtube |
|---|---|---|---|---|
| OPIMC/Pass | 0.25 | 0.56 | 27.89 | 26.76 |
| OMVC/Pass | 23.37 | 34.76 | 3753.02 | 2064.83 |
| ONMFI/Pass | 18.69 | 31.16 | 2887.12 | 1657.22 |
| ONMFDA/Pass | 20.09 | 30.63 | 2224.44 | 1307.14 |
| IMC | 2.91 | 6.31 | / | / |
| MultiNMF | 149.7 | 647.2 | / | / |

**Parameter Study:** We conduct the parameter experiments on the four aforementioned datasets for just one pass. We set the incomplete rate to 0.3 for the small datasets and 0.4 for the large ones, and report the clustering performance of OPIMC with $\alpha$ ranging over {1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3}. The results are shown in Figure 2: OPIMC obtains its best clustering results at $\alpha$ = 1e1, 1e-1, 1e-2 and 1e0 on WebKB, Digit, Reuters and Youtube, respectively.

*Figure 2: Parameter studies on the WebKB, Digit, Reuters and Youtube datasets, where the incomplete rate is set to 0.3 for WebKB and Digit and to 0.4 for Reuters and Youtube.*

**Convergence Study:** The convergence experiments are conducted on the four aforementioned datasets for 20 passes with the incomplete rate set to 0.4. According to the definitions of $R^{(v)}$ and $T^{(v)}$, and inspired by ONMF and OMVC, for the first pass the average loss is defined as

$$\mathcal{L} = \frac{1}{\min\{st, N\}} \sum_{v=1}^{n_v} \left( -2P^{(v)}_t + Q^{(v)}_t + \alpha \|U^{(v)}\|_F^2 \right) \tag{18}$$
where

$$P^{(v)}_t = \operatorname{tr}(U^{(v)T} R^{(v)}_t), \qquad Q^{(v)}_t = \operatorname{tr}(U^{(v)T} U^{(v)} T^{(v)}_t) \tag{19}$$

For the other passes, since the loss over the scanned instances can easily be counted, the average loss is defined as

$$\mathcal{L} = \frac{1}{N} \sum_{v=1}^{n_v} \left( -2P^{(v)}_N + Q^{(v)}_N + \alpha \|U^{(v)}\|_F^2 \right) \tag{20}$$

We concatenate the losses of all passes and show the results in Figure 3. As training goes on, the average loss gradually converges. Comparing with Figure 1, we observe that when the average loss converges, both NMI and AC reach stable values.

*Figure 3: Convergence studies on the WebKB, Digit, Reuters and Youtube datasets, where the incomplete rate is set to 0.4, the experiments are run for 20 passes, and the corresponding average loss $\mathcal{L}$ is recorded. Note that since we ignore the loss term $\operatorname{tr}(X^{(v)T} X^{(v)})$, the average loss $\mathcal{L}$ is negative.*

**Block Size Study:** In OPIMC, the size of the data chunk is a vitally important parameter. To study the performance of OPIMC with different chunk sizes, we conduct a block size study on the Digit dataset with the incomplete rate set to 0.4, ranging $s$ over {2, 5, 10, 50, 100, 250} and running the experiment for 10 passes. The results are shown in Figure 4: generally, the bigger the block size, the better the clustering results, and at $s = 250$ both NMI and AC reach high values. However, a larger chunk size also incurs a larger space cost.

*Figure 4: Block size study on the Digit dataset, where the incomplete rate is set to 0.4, the experiment is run for 10 passes, and $s$ ranges over {2, 5, 10, 50, 100, 250}.*

**Clustering Center Degradation Study:** In this experiment we verify the validity of filling the degenerate cluster centers. We conduct the experiment on the Digit dataset with an incomplete rate of 0.4, keep the original instance order, and run OPIMC with (OPIMC-F) and without (OPIMC-NF) filling the degenerate cluster centers for 10 passes. The results, shown in Figure 5, directly demonstrate the effect of filling the degenerate cluster centers.

*Figure 5: Clustering center degradation study on the Digit dataset, where OPIMC-F and OPIMC-NF denote OPIMC with and without filling the degenerate cluster centers, respectively. The incomplete rate is set to 0.4 and the experiment is run for 10 passes.*

## Conclusion

In this paper, we propose an efficient and effective method for the large-scale incomplete multi-view clustering problem that adequately exploits the instance missing information with the help of regularized matrix factorization and weighted matrix factorization. By introducing two global statistics, OPIMC obtains clustering results directly and effectively determines the termination of the iteration process. Experimental results on four real-world multi-view datasets demonstrate the efficiency and effectiveness of our method. In the future, we will focus on the generation of new classes and on the robustness of the algorithm.

## Acknowledgments

This work is supported in part by the NSFC under Grant No. 61672281, and the Key Program of NSFC under Grant No. 61732006.

## References

- Bickel, S., and Scheffer, T. 2004. Multi-view clustering. In ICDM, 19-26.
- Blum, A., and Mitchell, T. 1998. Combining labeled and unlabeled data with co-training. In COLT, 92-100.
- Cao, X.; Zhang, C.; Fu, H.; Liu, S.; and Zhang, H. 2015. Diversity-induced multi-view subspace clustering. In CVPR, 586-594.
- Chao, G.; Sun, S.; and Bi, J. 2017. A survey on multi-view clustering. arXiv preprint arXiv:1712.06246.
- Ding, Z., and Fu, Y. 2014. Low-rank common subspace for multi-view learning. In ICDM, 110-119.
- Fan, Y.; Liang, J.; He, R.; Hu, B.-G.; and Lyu, S. 2017. Robust localized multi-view subspace clustering. arXiv preprint arXiv:1705.07777.
- Gunasekar, S.; Woodworth, B. E.; Bhojanapalli, S.; Neyshabur, B.; and Srebro, N. 2017. Implicit regularization in matrix factorization. In NIPS, 6151-6159.
- Hu, M., and Chen, S. 2018. Doubly aligned incomplete multi-view clustering. In IJCAI, 2262-2268.
- Kim, Y.-D., and Choi, S. 2009. Weighted nonnegative matrix factorization. In ICASSP, 1541-1544.
- Kong, D.; Ding, C.; and Huang, H. 2011. Robust nonnegative matrix factorization using L21-norm. In CIKM, 673-682.
- Kumar, A., and Daumé III, H. 2011. A co-training approach for multi-view spectral clustering. In ICML, 393-400.
- Lee, D. D., and Seung, H. S. 1999. Learning the parts of objects by non-negative matrix factorization. Nature 401(6755):788-791.
- Li, S.-Y.; Jiang, Y.; and Zhou, Z.-H. 2014. Partial multi-view clustering. In AAAI, 1968-1974.
- Li, Y. 2016. Advances in multi-view matrix factorizations. In IJCNN, 3793-3800.
- Liu, J.; Wang, C.; Gao, J.; and Han, J. 2013. Multi-view clustering via joint nonnegative matrix factorization. In SDM, 252-260.
- Liu, X.; Li, M.; Wang, L.; Dou, Y.; Yin, J.; and Zhu, E. 2017. Multiple kernel k-means with incomplete kernels. In AAAI, 2259-2265.
- Nguyen, T. D.; Le, T.; Bui, H.; and Phung, D. Q. 2015. Large-scale online kernel learning with random feature reparameterization. In IJCAI, 2750-2756.
- Nie, F.; Cai, G.; Li, J.; and Li, X. 2018. Auto-weighted multi-view learning for image clustering and semi-supervised classification. IEEE Transactions on Image Processing 27(3):1501-1511.
- Qi, M.; Wang, T.; Liu, F.; Zhang, B.; Wang, J.; and Yi, Y. 2017. Unsupervised feature selection by regularized matrix factorization. Neurocomputing, 593-610.
- Ren, P.; Xiao, Y.; Xu, P.; Guo, J.; Chen, X.; Wang, X.; and Fang, D. 2018. Robust auto-weighted multi-view clustering. In IJCAI, 2644-2650.
- Shao, W.; He, L.; Lu, C.-T.; and Yu, P. S. 2016. Online multi-view clustering with incomplete views. In ICBDA, 1012-1017.
- Shao, W.; He, L.; and Philip, S. Y. 2015. Multiple incomplete views clustering via weighted nonnegative matrix factorization with L2,1 regularization. In ECML PKDD, 318-334.
- Son, J. W.; Jeon, J.; Lee, A.; and Kim, S.-J. 2017. Spectral clustering with brainstorming process for multi-view data. In AAAI, 2548-2554.
- Sun, S. 2013. A survey of multi-view machine learning. Neural Computing and Applications 23(7-8):2031-2038.
- Tao, Z.; Liu, H.; Li, S.; Ding, Z.; and Fu, Y. 2017. From ensemble clustering to multi-view clustering. In IJCAI, 2843-2849.
- Tulsiani, S.; Zhou, T.; Efros, A. A.; and Malik, J. 2017. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In CVPR, 209-217.
- Wan, Y.; Wei, N.; and Zhang, L. 2018. Efficient adaptive online learning via frequent directions. In IJCAI, 2748-2754.
- Wang, F.; Tan, C.; Li, P.; and König, A. C. 2011. Efficient document clustering via online nonnegative matrix factorizations. In SDM, 908-919.
- Wang, H.; Yang, Y.; and Li, T. 2016. Multi-view clustering via concept factorization with local manifold regularization. In ICDM, 1245-1250.
- Wen, J.; Zhang, Z.; Xu, Y.; and Zhong, Z. 2018. Incomplete multi-view clustering via graph regularized matrix factorization. arXiv preprint arXiv:1809.05998.
- Wu, B.; Wang, E.; Zhu, Z.; Chen, W.; and Xiao, P. 2018. Manifold NMF with L21 norm for clustering. Neurocomputing 273:78-88.
- Xing, J.; Niu, Z.; Huang, J.; Hu, W.; Zhou, X.; and Yan, S. 2017. Towards robust and accurate multi-view and partially-occluded face alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence 40:987-1001.
- Xue, H.-J.; Dai, X.; Zhang, J.; Huang, S.; and Chen, J. 2017. Deep matrix factorization models for recommender systems. In IJCAI, 3203-3209.
- Zhao, H.; Ding, Z.; and Fu, Y. 2017. Multi-view clustering via deep matrix factorization. In AAAI, 2921-2927.
- Zhao, H.; Liu, H.; and Fu, Y. 2016. Incomplete multi-modal visual data grouping. In IJCAI, 2392-2398.
- Zhu, Y.; Ting, K. M.; and Zhou, Z.-H. 2017. New class adaptation via instance generation in one-pass class incremental learning. In ICDM, 1207-1212.
- Zong, L.; Zhang, X.; Zhao, L.; Yu, H.; and Zhao, Q. 2017. Multi-view clustering via multi-manifold regularized non-negative matrix factorization. Neural Networks 88:74-89.