Law of effect
From Wikipedia, the free encyclopedia
The law of effect is a principle of psychology described by Edward Thorndike in 1898.[1] The version of the law before 1930 stated that responses to stimuli that produce a satisfying or pleasant state of affairs in a particular situation are more likely to occur again in that situation; conversely, responses that produce a discomforting, annoying, or unpleasant effect are less likely to occur again in that situation. Around 1930, Thorndike truncated the law, retaining only its first half: responses followed by a satisfying or pleasant state of affairs are more likely to occur again, with no corresponding claim that annoying consequences weaken responses.
The law is important in understanding learning, especially in relation to operant conditioning. However, its status is controversial. Particularly in animal learning, it is not obvious how to define a "satisfying state of affairs" or an "annoying state of affairs" independently of their ability to induce instrumental learning, so the law of effect has been widely criticized as logically circular. In the study of operant conditioning, most psychologists have therefore adopted B. F. Skinner's proposal to define a reinforcer as any stimulus which, when presented after a response, increases the future rate of that response. On that basis, the law of effect follows tautologically from the definition of a reinforcer.
In an influential 1970 paper,[2] R. J. Herrnstein proposed a quantitative relationship between response rate (B) and reinforcement rate (Rf):
B = k Rf / (Rf0 + Rf)
where k and Rf0 are constants. Herrnstein proposed that this formula, which he derived from the matching law he had observed in studies of concurrent schedules of reinforcement, should be regarded as a quantification of the law of effect. While the qualitative law of effect may be a tautology, this quantitative version is not.
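Herrnstein's hyperbola above can be sketched in a few lines of Python. The particular values of k and Rf0 below are illustrative, not taken from any experiment; the point is only the shape of the function, where B rises toward the asymptote k and equals k/2 when Rf = Rf0:

```python
def response_rate(rf, k=100.0, rf0=20.0):
    """Herrnstein's quantitative law of effect: B = k*Rf / (Rf0 + Rf).

    k is the asymptotic response rate; rf0 is the reinforcement
    rate at which responding reaches half of k. Both defaults
    here are hypothetical values chosen for illustration.
    """
    return k * rf / (rf0 + rf)

# Response rate grows hyperbolically toward the asymptote k:
for rf in (5, 20, 80, 320):
    print(rf, round(response_rate(rf), 1))
```

Doubling the reinforcement rate does not double responding; the gain diminishes as B approaches k, which is what distinguishes this quantitative claim from the qualitative, tautological form of the law.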