Programming teachers, being programmers and therefore formalists, are particularly prone to the ‘deductive fallacy’, the notion that there is a rational way in which knowledge can be laid out, through which students should be led step-by-step. One of us even wrote a book which attempted to teach programming via formal reasoning. Expert programmers can justify their programs, he argued, so let’s teach novices to do the same! The novices protested that they didn’t know what counted as a justification, and Bornat was pushed further and further into formal reasoning. After seventeen years or so of futile effort, he was set free by a casual remark of Thomas Green’s, who observed "people don’t learn like that", introducing him to the notion of inductive, exploratory learning.
There is a vast quantity of literature describing different tools, or, as they are known today, Integrated Development Environments (IDEs). Programmers, who on the whole like to point and click, often expect that if you make programming point-and-click, novices will find it easier. The entire field can be summarised as saying "no, they don't."
This paper starts out with the amusing factoid that normal measures of intelligence and motivation are absolutely lousy predictors of computer programming ability.
Good programmers, this paper says, are good because they have a deep understanding of how their language and machine work. They have a "mental model" of how the compiler and CPU work, and can test potential code against this model in their head before writing it. And when the code doesn't work, they know what assumptions they made about the compiler and the machine, so they can check those assumptions and see where they went wrong.
Apparently natural-language aptitude helps you be a good programmer too. It's hardly surprising to hear that someone who's good at translating their thoughts into verbal language is also good at translating them into more formal, structured programming languages. Also, good programmers tend to think "bottom up", considering the individual problems that need to be solved first, before thinking about the overall structure of the program.
I haven't had time to read the whole paper, but these conclusions suggest to me that we might start programmers off by teaching them, at a high level, how a CPU works; then move on to some kind of ultra-simplified assembly language; then a very simple course on how a compiler works; and only then introduce a more structured programming language. The aim of all this would be to encourage them to develop that mental model of what is happening under the hood, while giving them the time to build up the layers of complexity with a good understanding of each layer. A more scientific "identify assumption, test assumption, check result" style of debugging might also be usefully taught.
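The debugging loop described above can be sketched as follows. This is my own illustration, not anything from the paper: a deliberately buggy helper, with each assumption about it stated and tested explicitly rather than discovered by staring at the code.

```python
# Sketch of "identify assumption, test assumption, check result"
# applied to a buggy average function (example mine).

def average(xs):
    return sum(xs) / len(xs)   # hidden assumption: xs is never empty

def test_assumptions():
    # Assumption 1: average works on ordinary input.
    assert average([2, 4, 6]) == 4

    # Assumption 2: callers never pass an empty list. Test it directly:
    try:
        average([])
        empty_input_safe = True
    except ZeroDivisionError:
        empty_input_safe = False

    # Check result: assumption 2 was wrong, so the fix belongs either
    # in average() itself or at whichever call site passes an empty list.
    return empty_input_safe

print(test_assumptions())  # False: the empty-input assumption fails
```

Each iteration of the loop either confirms an assumption (narrowing the search) or falsifies one (locating the bug), which is exactly the hypothesis-testing habit the conclusion suggests teaching.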