Given a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E})\) of agents \(\mathcal{N}=\{1,\ldots,N\}\) connected by edges in \(\mathcal{E}\), we study how to compute an optimal decision on which the agents reach consensus and which minimizes the sum of agent-specific private convex composite functions \(\{\Phi_i: i=1,\ldots,N\}\), where \(\Phi_i := \xi_i + f_i\) belongs to agent \(i\), while respecting privacy requirements. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient method, PG-ADMM. At each iteration, every agent \(i\) computes the prox map of \(\xi_i\) and the gradient of \(f_i\), followed by local communication with neighboring agents. We also study its stochastic gradient variant, SPG-ADMM, in which each agent \(i\) has access only to noisy estimates of \(\nabla f_i\). These optimization models abstract a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence of both the suboptimality error and the consensus violation for PG-ADMM and SPG-ADMM, with rates \(\mathcal{O}(1/t)\) and \(\mathcal{O}(1/\sqrt{t})\), respectively. We implement PG-ADMM and SPG-ADMM on two different, but equivalent, consensus formulations and examine the effect of the underlying network topology on their convergence rates.
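To make the per-iteration structure concrete, one plausible instantiation of such a prox-gradient/ADMM step is sketched below; this is a minimal sketch only, in which the dual variables \(\lambda_i\), step size \(\gamma>0\), and penalty parameter \(c>0\) are illustrative assumptions rather than the exact PG-ADMM recursion. Writing \(\mathcal{N}_i\) for the neighbors of agent \(i\), each agent could update
\[
x_i^{t+1} = \operatorname{prox}_{\gamma \xi_i}\!\Big( x_i^{t} - \gamma \Big( \nabla f_i(x_i^{t}) + \lambda_i^{t} + c \sum_{j \in \mathcal{N}_i} \big(x_i^{t} - x_j^{t}\big) \Big) \Big),
\qquad
\lambda_i^{t+1} = \lambda_i^{t} + c \sum_{j \in \mathcal{N}_i} \big(x_i^{t+1} - x_j^{t+1}\big),
\]
so that only the iterates \(x_j^{t}\) of neighboring agents, and never the private data defining \(\Phi_j\), are exchanged over the edges of \(\mathcal{G}\); in a stochastic variant such as SPG-ADMM, \(\nabla f_i(x_i^{t})\) would be replaced by its noisy estimate.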