Let me start with a challenge: Understanding is a Poor Substitute for Convexity, says NN Taleb, a Professor of Risk Engineering, a former quantitative trader, and a pointed critic of what I call naive risk management. I enjoyed reading his books Fooled by Randomness and The Black Swan, not because I fully agree with them, but because they are full of original thoughts ... His recently published book Antifragile: Things that Gain from Disorder is on my desk now ...
I say challenge because I do not think this describes a theory or a model of models; however ...
Taleb points out: "Something central, very central, is missing in historical accounts of scientific and technological discovery. The discourse and controversies focus on the role of luck as opposed to teleological programs (from telos, 'aim'), that is, ones that rely on pre-set direction from formal science. This is a faux-debate: luck cannot lead to formal research policies; one cannot systematize, formalize, and program randomness. The driver is neither luck nor direction, but must be in the asymmetry (or convexity) of payoffs, a simple mathematical property that has lain hidden from the discourse, and the understanding of which can lead to precise research principles and protocols."
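Taleb's "simple mathematical property" is, at bottom, Jensen's inequality; in standard notation (my formulation, not a quote): for a convex payoff function f and a random input X,

```latex
% Jensen's inequality: variability raises the expected payoff of a convex function
\mathbb{E}[f(X)] \;\ge\; f\bigl(\mathbb{E}[X]\bigr)
```

Loosely speaking, more fluctuation in X can only help the expected payoff of a convex f; that is the mathematical content behind "gains from disorder".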
Of his seven rules, I fully agree with the "Option" rule and the "Practice-First" rule.
For optionality, I can recommend a generalized evolutionary prototyping approach, as we have applied it in software development for a long time now. The practice-first rule leads me to the spiral of mathematical innovation, which I describe as an abstraction-reconcretization spiral: you apply a theorem to examples, find a new abstraction, formulate a new theorem, apply it to examples again, and so on.
The greatest common divisor (GCD) of two integers n and m is "the largest of all integers i dividing n and m without leaving a remainder". From examples you find out that this is also "the maximum of all integers i dividing n and n-m ...". Applying this recursively, you will most probably discover the Euclidean algorithm.
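Here is a minimal sketch of that discovery in Mathematica (the name euclideanGCD is mine; the built-in GCD serves as a cross-check):

```mathematica
(* Euclidean algorithm: gcd(n, m) = gcd(m, n mod m), with gcd(n, 0) = n *)
euclideanGCD[n_Integer, 0] := Abs[n]
euclideanGCD[n_Integer, m_Integer] := euclideanGCD[m, Mod[n, m]]

euclideanGCD[252, 105]   (* -> 21 *)
GCD[252, 105]            (* built-in cross-check -> 21 *)
```

The step from "subtract m repeatedly" to Mod is exactly one turn of the spiral: an observation made in examples, condensed into a sharper abstraction.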
Then you might find that this algorithm can be extended to polynomials (experimenting with linear combinations ...) or, even more generally, to other instances of Euclidean rings, and so on.
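The same few lines carry over almost verbatim; a sketch, with polyGCD as my ad-hoc name and the built-in PolynomialGCD as cross-check (the result is determined only up to a constant factor):

```mathematica
(* Euclidean algorithm lifted to polynomials: divide with remainder, recurse *)
polyGCD[p_, 0, x_] := p
polyGCD[p_, q_, x_] := polyGCD[q, PolynomialRemainder[p, q, x], x]

polyGCD[x^3 - 1, x^2 - 1, x]      (* -> -1 + x *)
PolynomialGCD[x^3 - 1, x^2 - 1]   (* built-in cross-check -> -1 + x *)
```

That the recursion needs nothing more than a division with remainder is precisely what the abstraction "Euclidean ring" captures.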
Is this not acceptable as practice? A deeper look at the language of mathematics, formulated in predicate logic (object mathematics), shows that it requires "links" to models in order to formulate theorems.
And I like one of the core insights of speculative philosophy: "If we do not know more properties of a real behavior than those of our models approximating it, our models ARE reality" (my compilation).
Pointedly speaking: is everything mathematics or programming?
In Mathematica, one is guided to follow the compute-and-develop paradigm, and it offers many features to support explorative, experimental, and evolutionary abstraction-reconcretization spirals. You can stretch and expand Mathematica's computational knowledge base ... into your domain, across domains, ...
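One turn of that loop on the GCD example above, as a sketch (only the built-ins Table, GCD, and Flatten are used):

```mathematica
(* compute first: test the conjecture gcd(n, m) == gcd(m, n - m) on many instances *)
And @@ Flatten @ Table[GCD[n, m] == GCD[m, n - m], {n, 2, 50}, {m, 1, n - 1}]
(* -> True; then develop: promote the verified identity into a definition *)
```

Computing a few hundred instances before formulating anything is the point: the conjecture is cheap to test and cheap to discard.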
IMO, this workflow helps us get more upside than downside; may we call it Convexity?
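As a last toy experiment in the same spirit (a sketch; the quadratic payoff and the noise distribution are my choices, purely for illustration):

```mathematica
(* Jensen-style check: a convex payoff gains, on average, from variability *)
payoff[x_] := x^2;                                        (* convex toy payoff *)
samples = RandomVariate[NormalDistribution[0, 1], 10^5];
Mean[payoff /@ samples]    (* ~ 1: expected payoff under fluctuation *)
payoff[Mean[samples]]      (* ~ 0: payoff of the average input *)
```

The gap between the two numbers is the convexity premium: the upside of the fluctuations outweighs their downside.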