

### AuthorTopic: Schedules of reinforcement?  (Read 1421 times)


#### #J.Procrastinator

• Victorian
• Trendsetter
• Posts: 165
• Respect: 0
##### Schedules of reinforcement?
« on: October 23, 2013, 06:19:18 pm »
0
Can someone please explain to me the different types of reinforcement? What is a schedule of reinforcement exactly? And what's the order from most effective to least effective?

Examples would be great also! Thanks!
2015-2017: Bachelor of Science @ UoM

#### brenden

• Honorary Moderator
• Great Wonder of ATAR Notes
• Posts: 7145
• Respect: +2500
##### Re: Schedules of reinforcement?
« Reply #1 on: October 23, 2013, 07:08:45 pm »
+2
Negative reinforcement: aiming to increase the likelihood of a behaviour occurring again through removing an unpleasant stimulus

Positive reinforcement: aiming to increase the likelihood of a behaviour occurring again through the addition of a pleasant stimulus.

Fixed ratio schedule: a schedule of reinforcement whereby reinforcement is given after a set number of responses, and that number never changes.
So it's "fixed", as in fixed into place, or unchanging, and it's a ratio, right? So for every 5 presses of a button, the mouse gets 1 pellet. 5:1 is the ratio.
Fixed interval: also unchanging, but the reinforcement is given after a certain amount of time elapses. So for every hour someone works, they get paid $20. The hourly rate doesn't change, but there's a reinforcer given based upon time.

Variable ratio: a ratio of reinforcement that is unpredictable and changing. It normally has an average number of responses IIRC. So, you might reinforce the mouse after only one button press, which would be a 1:1 ratio, but then you wait another 99 button presses before the next reinforcement (99:1 ratio). Ultimately the average is 50:1 (one reinforcer per 50 responses ON AVERAGE). This sets it apart from a random ratio, which has no average.

Variable interval: unpredictable amount of time between reinforcers. Think of fishing. You might get a fish after 10 minutes, maybe after an hour. Maybe after 2 hours. The fish is the reinforcer, here, and it comes at a variable time.

Notice how I've grouped them.

It goes--

Unchanging:
Ratio
Time

Changing:
Ratio
Time

(FYI continuous reinforcement is reinforcing the behaviour every time it happens)

Continuous reinforcement is best for establishing a behaviour, but not good for maintaining it.

From here on, I'm not 100% sure, so I'll leave it to someone else rather than fuck you up for your exam.
✌️just do what makes you happy ✌️

#### #J.Procrastinator

• Victorian
• Trendsetter
• Posts: 165
• Respect: 0
• School Grad Year: 2014
##### Re: Schedules of reinforcement?
« Reply #2 on: October 23, 2013, 08:28:29 pm »
0
That's just what I needed. Thank you so much!
« Last Edit: October 23, 2013, 08:30:30 pm by #J.Procrastinator »
2015-2017: Bachelor of Science @ UoM
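The four schedules described above boil down to tiny decision rules, so they're easy to sketch in code. A minimal Python illustration (my own toy code, not from the thread; the numbers like 5:1 just mirror the examples above):

```python
import random

random.seed(0)  # reproducible illustration

def fixed_ratio(n, response_count):
    """Reinforce every n-th response (e.g. 5 presses : 1 pellet)."""
    return response_count % n == 0

def variable_ratio(mean_n):
    """Reinforce each response with probability 1/mean_n, so the number
    of responses per reinforcer varies but averages mean_n."""
    return random.random() < 1 / mean_n

def fixed_interval(period, elapsed):
    """Reinforce the first response once `period` time units have passed."""
    return elapsed >= period

def variable_interval(mean_period, elapsed):
    """Same idea, but the required delay is random, averaging mean_period
    (drawn fresh here just to show the unpredictability)."""
    return elapsed >= random.uniform(0, 2 * mean_period)

# Fixed ratio 5:1 -- presses 5, 10, 15, 20 earn a pellet.
rewarded = [press for press in range(1, 21) if fixed_ratio(5, press)]
print(rewarded)  # [5, 10, 15, 20]
```

The only real trick is `variable_ratio`: reinforcing each response with probability `1/mean_n` makes the responses-per-reinforcer unpredictable on any one run while still averaging `mean_n` in the long run, which is exactly the "average number of responses" brenden mentions.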

#### rebec28

• Victorian
• Posts: 9
• Respect: 0
##### Re: Schedules of reinforcement?
« Reply #3 on: October 30, 2013, 09:12:45 pm »
0
Which is more resistant to extinction? I would have thought variable interval but some people are saying variable ratio. Why?

#### Aelru

• Victorian
• Forum Regular
• Posts: 71
• etc. etc.
• Respect: 0
• School: Some secondary college off Albert Bandura's nose.
##### Re: Schedules of reinforcement?
« Reply #4 on: October 30, 2013, 09:35:56 pm »
0
Which is more resistant to extinction? I would have thought variable interval but some people are saying variable ratio. Why?

I'm quite interested in this question as well.
2012: [Methods]
2013: [Psychology][Specialist][Chemistry][English][Health&Human Development]

#### rebec28

• Victorian
• Posts: 9
• Respect: 0
##### Re: Schedules of reinforcement?
« Reply #5 on: October 31, 2013, 05:12:00 pm »
0
I think it's because variable ratio is quite resistant to extinction AND has a steady, high rate of responding, so it's an all-time favourite.

#### IvanJames

• Victorian
• Trailblazer
• Posts: 25
• Respect: 0
##### Re: Schedules of reinforcement?
« Reply #6 on: October 31, 2013, 05:30:00 pm »
+1
Which is more resistant to extinction? I would have thought variable interval but some people are saying variable ratio. Why?

Think of gambling, which is an example of a variable ratio schedule of reinforcement. Gamblers just think "Maybe the next pull of the lever will be a win", so they continue to act on the stimulus.

With variable intervals, the person has no control over when the reinforcement comes.

Basically, with variable ratio the organism believes they have some control over when the reinforcer comes, so it is more resistant to extinction. Variable interval is completely up to time.

I hope that kind of helps.
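The extinction-resistance point can also be seen in a toy simulation (my own illustrative code, not from the thread). Under a fixed ratio the gap between reinforcers is always exactly the same, so the organism notices immediately when rewards stop; under a variable ratio, long gaps are normal, so a drought doesn't obviously signal extinction:

```python
import random

random.seed(1)  # reproducible illustration

def gaps_between_reinforcers(schedule, n_responses):
    """Run a schedule over n_responses and record how many responses
    separate successive reinforcers."""
    gaps, since_last = [], 0
    for _ in range(n_responses):
        since_last += 1
        if schedule(since_last):
            gaps.append(since_last)
            since_last = 0
    return gaps

# Fixed ratio 5:1 -- reinforce exactly every 5th response.
fr5 = gaps_between_reinforcers(lambda k: k == 5, 1000)
# Variable ratio, 5:1 on average -- reinforce with probability 0.2.
vr5 = gaps_between_reinforcers(lambda k: random.random() < 0.2, 1000)

print(max(fr5), max(vr5))  # FR max gap is always 5; VR max gap is much larger
```

Every gap in `fr5` is exactly 5, while `vr5` routinely contains gaps well beyond 5, so a stretch with no reward looks unremarkable under the variable schedule.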

#### spectroscopy

• National Moderator
• Part of the furniture
• Posts: 1961
• Respect: +354
##### Re: Schedules of reinforcement?
« Reply #7 on: October 31, 2013, 06:38:08 pm »
0
It's variable ratio that's most resistant to extinction, 100%. It was a question on a SAC and a lot of people messed it up.

#### KingofDerp

• Victorian
• Forum Regular
• Posts: 84
• Respect: +1
• School: Luther college
##### Re: Schedules of reinforcement?
« Reply #8 on: October 31, 2013, 07:23:17 pm »
0
It's variable ratio that's most resistant to extinction, 100%. It was a question on a SAC and a lot of people messed it up.

Yes, because the organism is always anticipating the desired stimulus or consequence.
2012 Literature - 37 raw (A+ on the exam)
2013 Aims- Further (35)  Studio Art (45+) English (45+) Japanese SL (30) Psychology (40)
Tutoring English in 2014 $30/ph (negotiable) Also tutored lit this year and am willing to help you get into 35+ Lit score uni courses for a much lower rate ~~^.^~~