# Ensemble of sparse graphs with scale-free degree distribution that maximizes Gibbs entropy

Random (geometric) graph models are an important tool to model and understand real-world networks. Many such networks share the important property of having a scale-free degree distribution. Ideally, to model networks with a given degree sequence, we want to construct an ensemble of graphs with this degree sequence, such that each graph has equal probability in the ensemble. In other words, given the degree sequence, the ensemble is maximally unbiased. This is equivalent to requiring the ensemble to maximize the Gibbs entropy under the constraint given by the degree sequence.
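Concretely, the maximization described above can be written as a constrained optimization over graph probabilities; the notation $P(G)$ for the ensemble probability of a graph $G$ and $\bar{k}_i$ for the constrained (expected) degree of node $i$ is ours, introduced only for illustration:

$$
\max_{P} \; S[P] = -\sum_{G} P(G) \ln P(G),
\qquad \text{subject to} \quad
\sum_{G} P(G) = 1, \quad
\sum_{G} P(G)\, k_i(G) = \bar{k}_i \;\; \text{for all } i,
$$

where $k_i(G)$ is the degree of node $i$ in $G$. With a sharp (non-expected) degree-sequence constraint, the maximizer is the uniform distribution over all graphs with that degree sequence, i.e. the configuration model; with expected-degree constraints, the Lagrange-multiplier solution is an exponential random graph.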

The entropy maximization approach has been successfully applied to construct unbiased ensembles of graphs with a given fixed degree sequence (configuration model) or a given sequence of expected degrees (exponential random graphs). In addition, this approach was used to establish the important relationship between clustering and geometry in networks. Yet, given a real network snapshot, one cannot usually trust its (expected) degree sequence as some "ultimate truth" for a variety of reasons, including measurement imperfections, inaccuracies and incompleteness, noise and stochasticity, and, most importantly, the fact that most real networks are dynamic at both short and long time scales. Hence, a more reasonable constraint is to require the distribution of degrees to converge to a given limit.

In 2011, Chatterjee et al. characterized the limit of sequences of dense graphs with a given degree distribution. They showed that the limit is given by a graphon that is determined via an equation involving the distribution function. A 2013 result by Janson proves that the entropy of dense graphons converges to the Gibbs entropy of the corresponding ensemble. This result implies that the ensemble given by Chatterjee et al. maximizes the Gibbs entropy under the constraint of the degree distribution, and hence solves the maximum entropy problem for dense graphs with a given degree distribution. However, no such result is known for the case of sparse graphs, where the average degree converges to a constant.

We present the Hyper Soft Configuration Model (HSCM), which generates sparse graphs with a scale-free degree distribution and maximizes the entropy. The HSCM is based on random graph models in the context of graphexes and graphons for sparse graphs. Although the HSCM does not use any geometry, it is the infinite temperature limit of the Hyperbolic Geometric Random Graph model (HGRG). Given $\gamma > 1$, we define the measure $\mu$ with density $e^{\gamma x}$ and a sequence of growing intervals $A_n \subseteq A_{n + 1} \subseteq \mathbb{R}$ that cover $\mathbb{R}$. To construct a graph of size $n$, the HSCM samples points $x_1, \dots, x_n$ from $A_n$ with respect to the probability density $\mu_n = \mu|_{A_n}/\mu(A_n)$ and then connects each pair of nodes $(i, j)$, independently, with probability $W(x_i, x_j) = (1 + e^{x_i + x_j})^{-1}$. By selecting the scaling of our intervals in a similar way as the growth of the radius in the HGRG, we can create sequences of graphs such that the degree distribution converges to a mixed Poisson distribution, where the mixing parameter follows a Pareto distribution with exponent $\gamma$. As a result, the limit distribution is scale-free. In addition, the connection probability $W$ is chosen such that it maximizes the graphon entropy for each $n$. We then extend techniques from the work of Janson to prove that the HSCM graphon entropy converges to the Gibbs entropy.
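The sampling procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the definitive implementation: it assumes the growing intervals take the form $A_n = (-\infty, R_n]$ (the text only requires nested intervals covering $\mathbb{R}$), so that the truncated density $\mu_n(x) = \gamma\, e^{\gamma (x - R_n)}$ can be sampled by inverse-CDF; the cutoff `R_n` is treated here as a free parameter rather than derived from the HGRG radius scaling.

```python
import math
import random


def sample_hscm(n, gamma, R_n, seed=None):
    """Sample one graph from a sketch of the HSCM.

    Assumption (not fixed by the text): A_n = (-inf, R_n], so the
    truncated density mu_n(x) = gamma * exp(gamma * (x - R_n)) has
    CDF F(x) = exp(gamma * (x - R_n)), inverted as
    x = R_n + ln(u) / gamma with u ~ Uniform(0, 1].
    """
    rng = random.Random(seed)
    # Inverse-CDF sampling of the n node coordinates from mu_n.
    # 1 - rng.random() lies in (0, 1], avoiding log(0).
    xs = [R_n + math.log(1.0 - rng.random()) / gamma for _ in range(n)]
    # Connect each pair independently with the Fermi-Dirac probability
    # W(x_i, x_j) = 1 / (1 + exp(x_i + x_j)).
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = 1.0 / (1.0 + math.exp(xs[i] + xs[j]))
            if rng.random() < p:
                edges.append((i, j))
    return xs, edges
```

Note that large negative coordinates give connection probabilities close to 1 and large positive coordinates probabilities close to 0, so the cutoff $R_n$ controls the sparsity of the resulting graphs.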