The concept of "emergence" has become commonplace in the modelling of complex systems, both natural and man-made: a functional property "emerges" from a system when it cannot be readily explained by the properties of the system's sub-units. A bewildering array of adaptive and sophisticated behaviours can be observed in large ensembles of elementary agents, such as ant colonies or bird flocks, and in the interactions of elementary material units, such as molecules or weather elements. Ultimately, emergence has been adopted as the ontological support for a number of attempts to model brain function. This manuscript aims to clarify the ontology of emergence and delve into its many facets, particularly its "strong" and "weak" versions, which underpin two different approaches to the modelling of behaviour. The first group of models is here represented by the "free energy" principle of brain function and the "integrated information theory" of consciousness. The second group is instead represented by computational models, such as oscillatory networks, that use scalable mathematical representations to generate emergent behaviours and are thereby able to bridge neurobiology with higher mental functions. Drawing on the epistemological literature, we observe that, owing to their loose mechanistic links with the underlying biology, models based on strong forms of emergence are at risk of metaphysical implausibility. In practical terms, this translates into the overdetermination that occurs when the proposed model becomes only one of a large set of possible explanations for the observable phenomena. On the other hand, computational models that start from biologically plausible elementary units, and are hence weakly emergent, are not limited by ontological faults and, if scalable and able to realistically simulate the hierarchies of brain output, represent a powerful vehicle for future neuroscientific research programmes.