Abstract: One goal at the interface of topological data analysis and machine learning is to “tune” the topology of a network to that of the data. To that end, we report substantial progress on understanding the topology of ReLU networks. We study the canonical polyhedral complex, recently defined by E. Grigsby and K. Lindsey, which encodes a ReLU network’s decomposition of input space and thus its decision boundary. We find that while the polyhedral complex may contain cells of arbitrary combinatorial type, generically its geometric dual is a cubical complex. We call this the sign sequence cubical complex and establish additional algebraic structure on it, extending analogous structure from the theory of hyperplane arrangements. Using this structure, we show that the locations and sign sequences of the vertices of a network’s polyhedral complex, which can be computed recursively through the network’s layers, fully determine the complex both combinatorially and topologically. Computing the polyhedral complex by exploiting this structure is robust to the floating-point errors that arise in standard approaches to polyhedral intersection, yielding an effective algorithm for fully encoding decision boundaries. Preliminary empirical results indicate that, at initialization, the distribution of topological properties of shallow networks’ decision boundaries is roughly constant as width varies, whereas for deeper networks these topological properties do vary with width.
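The recursive, layer-by-layer sign-sequence computation mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm for locating vertices; it only shows how a point's sign sequence, the sign of each neuron's pre-activation, is accumulated through a ReLU forward pass. The toy weights and the helper name `sign_sequence` are illustrative assumptions.

```python
import numpy as np

def sign_sequence(weights, biases, x):
    """Record the sign (-1, 0, +1) of each neuron's pre-activation,
    layer by layer, while running the ReLU forward pass on x."""
    signs = []
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        pre = W @ a + b                       # pre-activations of this layer
        signs.extend(int(s) for s in np.sign(pre))
        a = np.maximum(pre, 0.0)              # ReLU
    return tuple(signs)

# Toy 2-2-1 ReLU network (made-up weights, for illustration only).
weights = [np.array([[1.0, -1.0], [0.5, 1.0]]), np.array([[1.0, -1.0]])]
biases = [np.array([0.0, -0.5]), np.array([0.25])]

print(sign_sequence(weights, biases, [1.0, 0.0]))  # → (1, 0, 1)
```

A zero entry, as at the second neuron here, indicates the point lies on that neuron's bent hyperplane; vertices of the polyhedral complex are exactly the points whose sign sequences contain enough zeros.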