Author Topic: UNSW Course Reviews (Read 163488 times)



  • Adventurer
  • *
  • Posts: 11
  • Respect: +16
Re: UNSW Course Reviews
« Reply #270 on: September 02, 2021, 12:39:51 pm »
Subject Code/Name: COMP3141 - Software System Design and Implementation

Contact Hours: Just 2x 2 hour lectures, but there is a split:
- One of them is a content lecture which introduces the course content for the week
- The other is a practice lecture, which covers the solution to the previous week's programming exercise as well as reinforcing the material from the content lecture, usually with a focus on working through actual problems

Assumed Knowledge: Either COMP1927 or COMP2521.

Assessment:
- 2x programming assignments, worth 20% of your mark (10% each)
- 8x online quizzes, worth 10% of your mark (be warned, these are very difficult!)
- 6x weekly programming exercises, worth 20% of your final mark
- Final exam, worth 50% of your mark, with a hurdle of 40/100 in the exam to pass overall

Lecture Recordings? Yes, recorded and uploaded onto YouTube.

Notes/Materials Available: Not much, but what you do get is nice: a set of good lecture slides and some tutorial questions.

Textbook: No textbook required, but the following are recommended as Haskell references by the course if you're looking for something:
- Thinking Functionally with Haskell, by Richard Bird
- Haskell Programming From First Principles by Christopher Allen and Julie Moronuki
- Programming in Haskell by Graham Hutton
- Real World Haskell by Bryan O'Sullivan, Don Stewart, and John Goerzen
- Learn You a Haskell for Great Good! by Miran Lipovača

Additionally, the course content draws from Data Refinement: Model-Oriented Proof Methods and their Comparison by Kai Engelhardt and W.P. de Roever, but it's said that this text is not suited for undergraduates.

Lecturer(s): Dr. Christine Rizkallah and Curtis Millar (who have both now left UNSW)

Year & Trimester of completion: 21T2

Difficulty: 4.5/5 without functional programming experience, 3/5 if you've done some before

Overall Rating: 5/5

Your Mark/Grade: 93 HD

Most people treat this as "the Haskell course" because there is a fair bit of Haskell programming, but that's not its stated intention. Rather, it provides a perspective on how we can use ideas inspired by mathematical proof and reasoning to construct safe software: Haskell just so happens to be a good language for applying this theory. The true value of this course is in the appreciation it gives you for safety and reasoning about programs, which is a point that some people (typically the more applications-focused crowd, though I mean this in the nicest way) can miss, because a lot of the stuff in this course can come across as abstract nonsense. It really does force you to examine how you previously approached correctness and to take a more principled approach to designing software, not only during the development process but also the testing process. On its own this is a very useful perspective to have, and it makes the course worthy of consideration as an elective for CS students (worth noting this is core for Software Engineering students, and I definitely agree with that).

If you haven't done functional programming before, this course will probably make you feel like you're relearning programming, which is entirely normal. You don't have to write very much code at all in this course in terms of the number of lines needed to finish most tasks, but the tradeoff is that you'll be thinking much harder about each line than you've probably ever done up until this point. To supplement all of this programming, there is some theory regarding types and the connection between programs and proofs, which is probably the coolest bit of the whole course (though the surprise is ruined if you've seen it before, as I had). Structural induction and natural deduction are also taught as they relate to that theory. While this is a bit of maths, don't worry - no prior course is needed to have covered it going in, so it gets taught from scratch.
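That more principled approach to testing is worth a concrete taste. Below is a rough Python analogue (my own sketch, not course material) of the QuickCheck-style property testing associated with Haskell: instead of hand-picking test cases, you state a property that should hold for all inputs and check it against many random ones.

```python
import random

def my_reverse(xs):
    """The implementation under test: list reversal."""
    out = []
    for x in xs:
        out.insert(0, x)
    return out

def prop_involution(xs):
    # Property: reversing twice gives back the original list.
    return my_reverse(my_reverse(xs)) == xs

def prop_distributes(xs, ys):
    # Property: reverse(xs ++ ys) == reverse(ys) ++ reverse(xs)
    return my_reverse(xs + ys) == my_reverse(ys) + my_reverse(xs)

def check(prop, nargs, trials=200):
    """Check a property against many random lists, QuickCheck-style."""
    rng = random.Random(3141)
    for _ in range(trials):
        args = [[rng.randint(-9, 9) for _ in range(rng.randint(0, 8))]
                for _ in range(nargs)]
        if not prop(*args):
            return False
    return True

assert check(prop_involution, 1)
assert check(prop_distributes, 2)
```

Real QuickCheck also shrinks failing inputs to minimal counterexamples; the point here is just the shift from "test this case" to "this property holds for all inputs".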

Overall, quite a fun course, if a bit tough at times. This is a must-do if you're interested in functional programming, since this course offers the most substantial introduction to the area at UNSW. If you like this course, consider following it up with COMP3161, which offers an analysis of programming language design (and particularly the design of functional programming languages).


  • ATAR Notes Lecturer
  • Honorary Moderator
  • Great Wonder of ATAR Notes
  • *******
  • Posts: 8811
  • "All models are wrong, but some are useful."
  • Respect: +2568
Re: UNSW Course Reviews
« Reply #271 on: September 02, 2021, 04:43:32 pm »
Subject Code/Name: MATH5845 - Time Series

Contact Hours: 2 x 2hr lecture (no tutorials; some 'tute' questions covered in the lecture)

Assumed Knowledge: None explicitly stated, but as with all level 3/5 statistics courses you should have a foundation up to second-year statistics (MATH2801/2901 level). Knowledge of linear models (MATH2831/2931) is highly recommended for one topic, but it only matters for that topic, and you only need to understand the linear model itself (don't worry about F-tests etc.). MATH3801/3901/5901 is not required.

Assessment:
- 1 x 15% Assignment
- 1 x 20% Assignment
- 5% Class participation
- 60% Final exam

Lecture Recordings? Yes

Notes/Materials Available: Detailed lecture notes and lecture scribbles are provided, along with excerpts from textbooks.

Textbook:
- Shumway, R.H. and Stoffer, D.S. (2016), Time Series Analysis and Its Applications: With R Examples, 4th edition, Springer-Verlag, New York.
- P. J. Brockwell & R. A. Davis (2002), Introduction to Time Series and Forecasting, Second Edition, Springer-Verlag, New York.
They're both good reads, but not needed.

Lecturer(s): Dr. Zdravko Botev

Year & Trimester of completion: 21 T2

Difficulty: 4.5/5

Overall Rating: 4.5/5

Your Mark/Grade: 96 HD

This is one of many postgraduate statistics courses. Recently it has been offered on a yearly basis.

Time series branches off from stochastic processes: it is the analysis of data indexed by a time variable. Time is assumed discrete because, although the underlying phenomenon may be continuous, in practice you only collect data at discrete time intervals. Your time series data can be quite long (collected over many timestamps), but you only study that one data set; there is no comparison between two time series in this course.

The first thing to mention is that this is a Zdravko course. He teaches you the theory. It's more appropriate to think of this course (at least presently) as Theory of Time Series. You'll be introduced to autocovariance/autocorrelation, ARMA, spectral densities, etc., all from a mathematical standpoint. Of course, there are a couple of questions that make you apply the theory to real problems/data sets, e.g. maximum likelihood estimation of the ARMA parameters. For someone like me, this is exactly what I want. Yet somebody who only cares about applications may not be so interested.
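To give a concrete taste of the first of those concepts, here is a minimal sketch (mine, not course material) of the sample autocovariance and autocorrelation functions, applied to a simulated AR(1) series:

```python
import random

def sample_acvf(x, h):
    """Sample autocovariance at lag h:
    (1/n) * sum_{t=1}^{n-h} (x_t - xbar) * (x_{t+h} - xbar)."""
    n = len(x)
    xbar = sum(x) / n
    return sum((x[t] - xbar) * (x[t + h] - xbar) for t in range(n - h)) / n

def sample_acf(x, h):
    """Sample autocorrelation at lag h (normalised by the lag-0 value)."""
    return sample_acvf(x, h) / sample_acvf(x, 0)

# Simulate an AR(1) process x_t = 0.8 * x_{t-1} + noise; its theoretical
# lag-1 autocorrelation is 0.8, so the sample ACF should land near that.
rng = random.Random(42)
x, prev = [], 0.0
for _ in range(2000):
    prev = 0.8 * prev + rng.gauss(0, 1)
    x.append(prev)

print(round(sample_acf(x, 1), 2))  # close to 0.8
```

The course derives why this estimator behaves well; the code is only meant to make the definitions tangible.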

The first half of the course introduces the mathematical background (including autocorrelation, which is surprisingly substantial) needed for time series algorithms. The second half focuses on time series concepts, and develops the algorithms that typically get implemented for time series analysis.

Class participation is free marks - just contribute once (question OR answer) and you walk away with 5%. Quizzes are mostly free marks as well. Basically, the question bank gets released, and one question gets randomly selected for you to submit a response to. We had at least 1 week to prepare our answers for both quizzes. The difficulty really comes from the final exam in my opinion (up till then, difficulty is something like 2.5/5).

In short, I just felt there was no time to answer everything. It was nice to know that out of the 4 questions given, we only needed to answer 3. Somehow, one of the three I picked was way too long. I remember submitting the exam with 26 or so seconds to spare; zero time to actually check my answers.

In terms of the coding, Zdravko supports at least Matlab, R, and Python. Choose any one of the three, and roll with it. (However, his live coding is in Matlab, because that's what he's more comfortable with.)

Despite it being a theoretical course, I would ask many postgrad students: why would you skip time series? It's still pretty fundamental knowledge for a working statistician, in my opinion. (Time series is also used in ML apparently, but I haven't investigated how.)
« Last Edit: September 11, 2021, 02:02:07 pm by RuiAce »


  • Adventurer
  • *
  • Posts: 11
  • Respect: +16
Re: UNSW Course Reviews
« Reply #272 on: September 02, 2021, 05:00:36 pm »
I originally wasn't going to post this review since another poster gave an excellent summary, but I have been persuaded...

Subject Code/Name: MATH2901 - Higher Theory of Statistics

Contact Hours: 2x 2 hour lectures, 1x 1 hour tutorial

Assumed Knowledge: Formally, only one of MATH1231, MATH1241, MATH1251 or DPST1014 is required.

However, I would recommend having done MATH2011 or MATH2111 as well, not just because some of their content appears, but also because you'll benefit from the mathematical maturity.

Assessment:
- Quiz, worth 5%
- Midterm quiz, worth 20%
- Written assignment, worth 15%
- Final exam, worth 60%

Lecture Recordings? Yes, on Blackboard Collaborate.

Notes/Materials Available: Nothing too impressive here: some mediocre lecture slides that had typos in them here and there, some tutorial problems with solutions and some notes on how to use R (which were actually very good).

There's a very comprehensive set of course notes from a previous lecturer floating around, of the same quality as the R notes, although you have to find these on your own as they weren't provided.

Textbook: Not required, but Introduction to Mathematical Statistics by Robert Hogg may be helpful.

Lecturer(s): Dr. Donna Mary Salopek

Year & Trimester of completion: 21T2

Difficulty: 4/5

Overall Rating: 0.5/5 - harsh, but unfortunately deserved in my eyes

Your Mark/Grade: 84 DN, which is the most poetic end to this course I could've imagined

Whether you do this course or not, learning some proper statistics beyond what is taught in high school or 1st year maths is always really good knowledge to have. This course is inherently a more applications-focused one, but the first half (probability theory) should appeal to you if you're more into pure maths, as I am. For a variety of reasons, this was the most difficult of the level 2 core maths courses so far for me. You really have to be on top of your game specifically when it comes to algebraic manipulations and calculus, as a lot of the side calculations, which aren't even in the realm of probability or statistics anymore, are sometimes very nontrivial. There's also a lack of any real intuition for the inference half of the course, so prepare for a lot of rote learning and having to take a bunch of things on faith.

What absolutely tanks the rating of this course is the teaching and organisation side, which was fairly disappointing this term compared to what I've heard about previous offerings. I truly could rant without end: the midterm, the assignment, the exam, basically anything. You name it, there was probably something wrong with it. A lot of what I have to say is just going to be straight shade though, so I won't comment on specifics (I also don't want to rehash the points mentioned by a previous review from this term). That should give you enough of an indication as to my thoughts, and I can assure you that this isn't just a personal thing - the disappointment seems universal amongst those who did the course this term.

It borders on cliché at this point that so many statistics courses at educational institutions around the world end up being not great, and cases like this certainly do not help with that stereotype (at least as far as UNSW is concerned). I already wasn't planning to do any further statistics courses after this, but I could certainly see how this would leave a sour taste in the mouths of those who are on the fence about their major and potentially turn them away, which is always a shame. If Donna is going to take this course again, she definitely has much room for improvement, and I really hope she reads the constructive feedback she has been given this term and tries to take some of it on board.


  • Fresh Poster
  • *
  • Posts: 1
  • Respect: 0
Re: UNSW Course Reviews
« Reply #273 on: September 08, 2021, 11:07:30 am »
Subject Code/Name: COMP6771 - Advanced C++

Contact Hours: 2 x 2h Lectures (delivered over YouTube Live), 1 x 1h Tutorial (delivered over Zoom)

Assumed Knowledge: Formally, COMP2511 (only basic OOP concepts such as inheritance and polymorphism are drawn from this course)

Assessment:
- Assignment 1: STL containers/algorithms (15%)
- Assignment 2: Operator overloading, OOP (25%)
- Assignment 3: Templates and iterators (30%)
- Final exam: 3 hours online, 2 programming exercises (30%)

Lecture Recordings? Yes, all lectures are archived on Hayden's YouTube channel (which also includes his other courses, which is great). Hayden's tutorial is also recorded each week.

Notes/Materials Available: Slides to accompany the lectures are given; however, they are sometimes pretty barebones, and often had errors or hadn't been updated since the previous offering, which is slightly annoying if, like me, you'd rather learn by reading notes than by watching lectures.

Textbook: Bjarne Stroustrup's textbook is listed as "If we had to point you to a single resource", but don't bother, the recordings/slides along with cppreference.com are more than enough.

Lecturer(s): Hayden Smith (with guest lecturers in week 10 from Optiver)

Year & Trimester of completion: 21T2

Difficulty: 2.5/5

Overall Rating:  4.5/5

Your Mark/Grade: 96 HD

Despite the big self-learning component of this course (which is basically wading through cppreference.com and Stack Overflow), this course has the best content of any course I have taken so far. Hayden is a really great lecturer. One of the comments I've seen a few times about him is that "he doesn't seem very knowledgeable since he googles stuff in the lectures". I actually think this aspect of his C++ teaching is good - most of your time in this course will be spent navigating online C++ library specifications and the like, and seeing him use these websites and picking up on what he looks for when seeking an answer becomes very relatable as students complete the assignments. The forum support (edstem) is excellent; shoutout to one of the tutors, Nathaniel.

I really like the way the language is presented in the lectures, and students can immediately see the advantages of C++ over other languages they will have previously learned in CS at UNSW, such as Java and C. One thing I wasn't aware of initially was that this is more of a "C++ design course", in other words how to write C++ in a "correct" way (as there are many ways to do things in this language). The assignments are tailored towards this idea - rather than getting students to build cool applications in C++, the assignments are a means to reinforce good C++ practices. This did get on my nerves a bit with the assignment marking though - different tutors marked assignments differently, and it seemed there weren't consistent marking criteria in some places, which became very obvious as I talked to friends taking the course.

The assessment structure, being heavily assignment-loaded with only a 30% exam, is a big plus IMO, and I think more CS courses should move to this model (if they haven't already). It is pretty rough, however, if you have a heavy-workload term with other assignment-loaded courses. The first question of the exam doesn't really test the learning done throughout the course, which was mainly from learning about and leveraging C++ features to complete the assignments. Instead it was an algorithmic question that students could have completed before ever taking the course, using C knowledge. The second question's specification was a bit poor: many important details were left in a footer at the bottom, which took me a while to read as I was trying to decipher the overall question (thankfully these were only small parts in terms of marks). Despite this, I personally thought the exam was reasonable, and I could have done better had I been well slept and/or better prepared; however, it was tight, and many students did not finish (or get close to finishing).
« Last Edit: September 08, 2021, 11:15:31 am by cherloire »


  • Trailblazer
  • *
  • Posts: 26
  • Respect: +34
Re: UNSW Course Reviews
« Reply #274 on: November 18, 2021, 04:13:15 pm »
Subject Code/Name: ECON2127 - Environmental Economics

Contact Hours:  2 x 1.5 hour lecture per week. 1 x 1.5 hour tutorial per week.

Assumed Knowledge: ECON1101. This will probably change next year to ECON2101, as the lecturer is considering making this a third-year elective. Even if it stays as it is, take ECON2101 and consider taking ECON2112 - they'll make this course a breeze. Having taken something like ECON3106 beforehand made this course mostly revision.


Assessment:

10% Tutorial Questions. They chose two or three problem sets we had to submit at the start of the tutorial. You can also just submit every problem weekly if you're not into that sort of thing.

20% Midterm. Nothing super difficult, just stay on top of the tutorial questions. Average was in the 70's.

2x10% Assignments. These were a little more difficult and had more parts than the tutorial problems. They served as exercises extending previous tutorial problems and lecture material.

50% Final Exam. Questions similar in difficulty to the midterm, but focused on the latter half of the course.

Lecture Recordings? Full lecture and tutorial recordings available.

Notes/Materials Available:  Full slides and textbook chapters available.


Lecturer(s): Dr Tess Stafford, 4.5/5. Tess held a two-hour consultation call every single week for whoever wanted to pop in and ask questions, which says enough about her - she's great to learn from.

Year & Trimester of completion: 2021/T3

Difficulty: 1/5.

Overall Rating:  5/5.

Your Mark/Grade: TBA

Comments: A nice and chill final econ elective to round out an otherwise hectic term. Would recommend to any econ student who is even mildly interested in the subject. I've also heard good things about its little sister ECON1107 (which I think some people doing environment degrees can use as an elective?), so consider that as well.
Studying Economics/Mathematics @ UNSW


  • Trailblazer
  • *
  • Posts: 26
  • Respect: +34
Re: UNSW Course Reviews
« Reply #275 on: November 22, 2021, 10:40:50 pm »
Subject Code/Name: ECON3208 - Applied Econometric Models

Contact Hours:  2 x 1.5 hour lecture per week. 1 x 1.5 hour tutorial per week.

Assumed Knowledge:  ECON2206 (or be enrolled in a Data Science degree and take MATH2831).


Assessment:

2x15% Assignments. We were given a dataset, a Stata file, and a sheet of questions to answer. These questions were then tested in a multiple-choice Moodle quiz. A bit strange, but nothing difficult.

25% Group Project. This is an 8 page empirical research paper where we were given a dataset used in a paper, and then asked to answer the same question as the paper using the techniques described in lectures. They randomly assigned the groups within tutorials, or you could choose to do the project by yourself. 5% is from a team assessment, so if you've got a bad group you can flame them there.

45% Final Exam. 50 multiple choice questions in 2.5 hours.

Lecture Recordings? Full lecture and tutorial recordings available.

Notes/Materials Available:  Full slides provided.


Lecturer(s): Mike Keane, 3/5. Mike taught the start and end of the course. His slides were a bit dense, but alright overall.

Fanghua Li, 4/5. Fanghua was pretty good for this course; I couldn't really have asked for much more from her.

Year & Trimester of completion: 2021/T3

Difficulty: 4/5.

Overall Rating:  3/5.

Your Mark/Grade: TBA

Comments: A lot of statistics courses get (deservedly in my opinion) a bad rap for being deliberately obfuscatory and hard to follow. I can't really say the same about this course. Sure, it still has difficult content that takes time to wrap your head around, but it never felt like there was a need for the big conceptual leaps and blind acceptances of theorems that I felt were present in ECON2206. This course doesn't hold your hand, but it takes time exploring the big ideas before launching into a more in-depth exploration of the topics.

This course is essentially an extension of ECON2206, where you spend most of your time patching up the holes left behind by that course. Most of the lectures start with the premise of "here's something wrong with a particular regression, how can we fix it?", and then take a very logical path in ruling out what can and can't be done to fix that issue. This resulted in a course that felt a bit disjointed and lacking an identity of its own - you're constantly going back and forth between issue and solution, and not really considering whether each solution brings any new issues of its own.
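To give a flavour of that "something wrong with a regression" pattern, here is a toy Python sketch (my own, not from the course) of omitted-variable bias: leaving out a regressor that is correlated with the included one biases the slope you estimate.

```python
import random

def ols_slope(x, y):
    """Slope of a simple one-regressor OLS fit: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

rng = random.Random(0)
n = 5000
z = [rng.gauss(0, 1) for _ in range(n)]       # the omitted variable
x = [0.7 * zi + rng.gauss(0, 1) for zi in z]  # x is correlated with z
# True model: y = 1 + 2x + 3z + noise.
y = [1 + 2 * xi + 3 * zi + rng.gauss(0, 1) for xi, zi in zip(x, z)]

# Regressing y on x alone picks up z's effect through the correlation,
# so the estimated slope lands well above the true coefficient of 2.
print(round(ols_slope(x, y), 2))
```

The "fix" lectures then explore (controlling for z, instrumental variables, etc.) is exactly about removing that bias.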
« Last Edit: November 30, 2021, 05:43:20 pm by HelpICantThinkOfAName »
Studying Economics/Mathematics @ UNSW


  • MOTM: AUG 18
  • HSC Moderator
  • Part of the furniture
  • *****
  • Posts: 1039
  • All doom and Gloom.
  • Respect: +686
Re: UNSW Course Reviews
« Reply #276 on: November 25, 2021, 02:41:55 pm »
Subject Code/Name: COMP2511 - Object-Oriented Design & Programming

Contact Hours:
2 x 2hr lectures
1 x 3hr tutlab

Assumed Knowledge:
Prerequisite: COMP1531 AND (COMP2521 OR COMP1927)

Assessment:
Assignment - 15%
Project - 35% (3 milestones over 4 weeks)
-- milestone 1 + 2 given two weeks, worth 17.5%
-- milestone 3 also given two weeks, worth 17.5%
Class Mark (Tutorials + Labs) - 10%
Final Exam - 40%

Lecture Recordings?

Notes/Materials Available:
Slides and tutor notes, lab exercises

Textbook: Some suggestions for books that cover at least some of the topics in this course:
- Head First Design Patterns, by Elisabeth Freeman and Kathy Sierra
- Refactoring: Improving the Design of Existing Code, by Martin Fowler

Lecturer(s): Ashesh Mahidadia

Year & Trimester of completion:


Overall Rating:
-2/5 (adjusted from 0 pre-exam)

Your Mark/Grade:
after finals

The course was pretty okay for the first half - new raccoon (Refactoring Guru > Tom Nook), a relatively tame assignment (albeit verbose and frustrating to work through; I didn't like how long it was for the purpose it served as an intro to object-oriented programming - it was easy but too long), occasionally unnecessarily long lab tasks, etc. I could get past that, but then the course went to shit when the project happened. I'll also add that the labs that ran during the project were really long, so what they taught was lost on students because of how bad the project ended up being. Retrospectively, these labs only really served their purpose as study material for the exam, and as such were often out of context with the lectures at the time.

For context, we were told that the automarking process (which wasn't a thing in the previous offering of the course) was needed to ensure greater breadth in testing the correctness of students' projects, which in turn awards fairer marks, particularly to those who completed more work. The only problem with these intentions (which I fully support, as they make logical sense) was that the execution was mindbogglingly poor, and it didn't achieve either of the objectives I've listed (correctness + fair marks), both of which will be addressed below. There are also certain other reasons that I think are potentially partially responsible for the poor execution, but I won't go into those in depth because they aren't as pertinent to the course itself. Just touching on them: it often felt like there could have been more hands-on support from course administration, especially when the course was visibly going awry, but for whatever reason (extra work, other commitments etc.) there wasn't. Nitpicking slightly, the announcements were sometimes inconsistent (ie. "we won't give you X input" / "we won't test you on Y case", then those events actually happened, stuff like that).

But anyway, the main spiel:
From the start, the timeline that was employed should've rung alarm bells. Two weeks per milestone is not bad, though more time would be preferable. But when the assignment is split like it is, and the second "half" depends hugely on the first (the whole point of the second bit is how well your design in the first adapts to new criteria; to quote the project specification: "60% of your [Milestone 3] automark will come from testing a completed interface which includes all the requirements in Milestone 2, and incorporation of the following new requirements (1.1 to 1.4)."), it's imperative that students get feedback really quickly. There are two weeks between the two due dates, and as such two lab sessions. However, due to the structure of the course, we demonstrate our product to our tutors in the lab session immediately following the first due date and receive feedback in the next. Depending on when your session is (or if your tutor gives feedback outside lab time), the time remaining to act on that feedback for the final product may vary anywhere between 4-7 days. This is particularly nitpicky, but it certainly isn't the worst part, because that title is reserved for the various shenanigans that automarking created. I have no words to describe automarking other than genuine shit, because a) as stated before, the execution was awful, b) the process to remedy this was equally if not more awful, and c) the automarks were released really damn late, ie. 5 days from the milestone 3 deadline (they genuinely could have been released earlier, unless the autotesting suite wasn't available before the automarks were released, which would point to admin unpreparedness).
This course already has an implicitly high workload attached to it, but these late results made us scramble harder (and unnecessarily so, IMHO, since it was in no way our fault), especially since few of the errors the autotests raised were particularly helpful in pointing out actual flaws in groups' programs. It was genuinely enraging at the time, and even in hindsight, remaining somewhat level-headed, it's impossible to describe as anything other than a complete shocker. The flow-on effect of this late release and failure to accomplish the initial rationale for automarking was that, despite it being no fault of the students, students had close to no time to fix these non-errors in milestone 2 because of the looming milestone 3 due date. It became a dilemma between working on milestone 3, which relied on the "buggy" milestone 2, or maximising the previous marks and sacrificing milestone 3. For context, you would likely have failed autotests in milestone 3 similar to those in milestone 2. In the end many groups had no choice but to go with the latter option because of the hanging threat.

Now, addressing the remarking process (ie. "b) the process to remedy this was equally if not more awful"): the initial remark was slated to be returned on the Saturday before the Monday due date, IIRC, which to a student is absolutely outrageous. The amount of organisational disarray would have been ridiculous. We had no dry runs prior to the submission for Milestones 1 + 2 - nothing, not even the most basic checks just to ensure we wouldn't fail on a technicality rather than incorrectness. This would have prevented a lot of the problems that arose. The official(?) reason for not providing a dry run was that it'd give away the testing suite, which seemed weird and remains so. A LOT of groups failed on dumb technicalities, and even a remark wouldn't have solved this, because there were so many technicalities that a single remark may have solved one only for your group to uncover another. Despite this literally being in no way the students' fault, it was made out as if it was. We weren't allowed to "debug" - but many groups just wanted to fix the technical errors as opposed to logic errors, ie. the ones the autotests wouldn't facilitate, which weren't even wrong in the first place. In the end, dry runs were released for milestone 3 (anything away from the actual testing suite would have been okay for milestone 2), but these ended up being provided two days after the automarks were actually released and were lacklustre at best - just the most basic reused milestone 2 tests.

Other issues related to remarking include but aren't limited to:
- The use of a marking cap to allow for small incremental errors/differences between the tests and groups' work. However, this initiative failed for multiple reasons: because of how the autotests ended up running, it came off as an implication of a poor specification rather than of assumption variation. The autotests were also capped at 80-90, which wasn't particularly helpful at first since a lot of groups initially got way lower than that. I will concede something below
- There was a remarking penalty for "non-atomic changes" which were often necessary for some groups because the set of changes classed as atomic was (somewhat) objectively narrow. This penalty was kept in place even after the shitshow this ended up being, which I personally thought was rather ridiculous (it wasn't even reduced, but I'd like to think it was adjusted slightly behind the scenes, despite the max 20% penalty still being a thing)

I will concede, though, that this whole process would have been acceptable had the autotests worked as intended (with a provided dry run, of course), but as they didn't, it just made everything a whole lot worse. Another concession: you did get the highest mark of all the remarks, but I think this pales in comparison to how bad automarking ended up being.

The last point (ie. "a) as stated before, the execution was awful"): the biggest problem here was that a lot of the project was open to interpretation, which a lot of the autotests did not factor in. While there was good breadth in testing, what they ended up doing was going into too much depth, thus by definition making assumptions which in many cases conflicted with the more than valid assumptions made by some students. We were told that we should make assumptions and were encouraged to do so where necessary, then we essentially got screwed for doing the exact thing we were told to do: fair assumptions on basic points not cleared up by the specification (ones that didn't even warrant a forum question) were causing autotests to fail, and we didn't know what these "errors" were. We were also told that the autotests would test "lower level / general stuff" and NO edge cases, but this was in general not true (some tests fell under the general umbrella of "edge case", others tested higher-level stuff where by definition students' interpretation comes into play). A phrase I saw another student use that encapsulates this whole saga rather well is that "you're allowed to make assumptions, as long as they're also the ones we make", which is frankly ridiculous. If the specification and autotests needed assumption X to pass, it should have been explicitly stated in every case, not just a select few (which I will give *some* credit for) and vaguely elsewhere. I also saw a student say something along the lines of "the project uses design by contract but essentially expects us to defensively program". It's just a shame because, overall, autotesting is worth 14% of your OVERALL grade - for some rather extreme context, getting 0 for automarking in total can drop you from 100 almost down to a Distinction.
It's even more of a shocker that the autotests didn't do their job properly, and more so still when you realise that autotesting was worth more than design in what is fundamentally a software design course (1.33x more, if I recall correctly).

An example of a really bad test that was actually given:
For context, we made a dungeon crawler game. A particular enemy can spawn, and has a chance of spawning with armour; that chance is arbitrarily decided by your group. However, there was a test in the automarking suite you could fail if NONE of the first ten of that enemy spawned with armour, i.e. if you assumed this enemy had a 10% chance of spawning with armour, you'd fail this test roughly 1/3 of the time. This test was purely luck-based, and statistically favours groups that arbitrarily chose a higher armour-spawn chance. Now, this particular test wasn't worth a lot (given the number of tests in the suite), but when this sort of thing crops up multiple times across the testing suite, you can imagine the fury of the students. How this particular test was a good idea, I'll never know.
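To see where the "roughly 1/3" figure comes from: if each spawn independently has probability p of having armour, the chance that none of the first ten do is (1 - p)^10. A quick sketch (the 10% rate is just the hypothetical from the example above):

```python
# Probability that NONE of the first n enemies spawn with armour,
# assuming each spawn independently has probability p of armour.
def prob_all_unarmoured(p: float, n: int = 10) -> float:
    return (1 - p) ** n

# With the hypothetical 10% armour chance from the example:
print(prob_all_unarmoured(0.10))  # 0.9**10 ≈ 0.349, i.e. fail ~1/3 of the time
```

So a group that picked, say, a 30% armour chance would only fail this test about 2.8% of the time, which is exactly the statistical unfairness described above.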

Other pertinent points:
- The response to criticism was passive and slow. Some feedback ran along the lines of "go read the spec", "don't worry about it", etc. There was also a roughly 15-minute window where the course forum temporarily disabled public posting/commenting, which seemed really strange given the timing (at the peak of the complaints and student anger). Even considering how long it took to get marks, it felt like it took even longer to get responses and feedback on criticism of the automarking process. In short: a lack of transparency, stability and communication
- I personally found it weird that a deadline extension was never on the table (even though many students had made it clear in private circles that an extension wouldn't fix things). The only one afforded to us was the 5-hour extension for a 5-hour GitLab outage during the first submission. I can guarantee that the outage slowed students down by a lot more than 5 hours, even if a longer extension would only have prolonged the pain
- Groups with bigger issues that couldn't be resolved by a re-run of the automarking received manual marking, but at scale this was infeasible. It felt really selective, and I can imagine that a) some groups may not have been bothered any more and b) many had bigger issues. Given the problems this course has had in previous offerings, it would have been better to execute this properly the first time. Succeeding after manual marking just felt bittersweet; it felt really damn wrong to have to blunder through all this bureaucratic BS just to be assessed correctly.
- If code coverage was high enough, it's worth wondering whether using each group's own testing suite might actually have been fine, but that's a point for another time.

It's a shame, because this course genuinely has potential; OOP as a concept is pretty interesting, but like many other courses (especially certain ones I've taken previously), off-the-mark administration ruins the student experience. I took only two courses this term and was still fully occupied, i.e. a disproportionate workload. It's hard to believe I was considering taking another course at the start of term, and I couldn't be happier that I didn't, given how this turned out. I should also reiterate that this is NOT in any way an attack on the course staff; they clearly had the right intentions and the right rationale for their changes. It just so happens that the final product was a devastatingly poor student experience. I might add: the project is worth 35% of your total grade and the labs only a portion of 10%, yet I've actually taken more away from the labs, given how panic-inducing this project was. I've also never seen an effort-to-marks ratio this disproportionate, even in some parts of HSC English.

Post-exam: literally every pre-exam problem was compounded. I went into the exam a bit more open-minded and hoping for improvement, which unfortunately never came. The exam itself was shocking. I would not be surprised if many people failed the 40% hurdle (on raw marks, before any scaling).

I will grant that the theory part of the exam was pretty smooth sailing, and well written. The programming questions, however, just about summed up the whole term: too long, too hard and too verbose. Difficulty-wise, literally none of the material we were told to prepare with (sample questions, lab questions, tutorial questions) came close to the programming section. The prep was piss-easy; this was brutally difficult. The samples were largely irrelevant anyway, because we'd already seen those questions as lab problems. I would imagine some, if not most, of the students who did the recommended preparation were still 100% screwed, which speaks to the ridiculousness of the exam.

You basically had two choices: plan out your response, or dive straight in. Either way you'd hit time drains. Diving straight in meant you couldn't properly tackle the design of the problem, which matters in a course literally called Object-Oriented Design and Programming. Planning out your response took too long (as it did for me; I panicked and ended up half-arsing both a plan and a response), leaving you with not enough time to complete the exam. The sheer verbosity and length of the exam made it impossible to finish; I doubt the writers took the exam themselves, or even gave it to a tutor to try, because it was frankly ridiculous. Even six hours, twice the allocated time, wouldn't have saved the majority of the cohort (and would only have prolonged the pain and confusion anyway), who were making post-exam jokes along the lines of "haha, see you next year guys". If last term's exam was merely "bad" (or so I have heard), I have no choice but to brand this one absolutely fucked. I have never taken an exam written worse, nor had an exam experience worse than this, EVER (regardless of whether it was self-sabotage, as has happened before, or the fault of the people involved in running the exam). It's telling that I've enjoyed courses while not doing well in them, and will rate courses on their merits regardless of my mark, so I think I'm being more than fair to this offering.

Again, this course absolutely has the potential to be good, but this offering has been nothing short of shocking. I really thought the automarking saga was rock bottom, but as it turns out there was an even rockier bottom underneath. I wanted to rant more, but I'm honestly so done with this particular offering; I think the fact that a) I've bumped my rating down to NEGATIVE two says enough, and b) "I have never taken an exam written worse, nor had an exam experience worse than this, EVER" says more than enough about a course already rated 0.
« Last Edit: December 02, 2021, 11:06:35 pm by fun_jirachi »
HSC 2018: Mod Hist [88] | 2U Maths [98]
HSC 2019: Physics [92] | Chemistry [93] | English Adv [87] | 3U Maths [98] | 4U Maths [97]
ATAR: 99.05

UCAT: 3310 - VR [740] | DM [890] | QR [880] | AR [800]
Guide Links:
Subject Acceleration (2018)
UCAT Question Compilation/FAQ (2020)
Asking good questions


  • Trailblazer
  • *
  • Posts: 26
  • Respect: +34
Re: UNSW Course Reviews
« Reply #277 on: November 29, 2021, 07:36:18 pm »
Subject Code/Name: ECON3123 - Organisational Economics

Contact Hours: 2 x 1.5 hour lecture per week. 1 x 1.5 hour tutorial per week.

Assumed Knowledge: ECON2101 or ECON2112. I'd recommend taking both before this course.


4x10% Problem Sets. Two or three problems that are a bit more difficult than what was shown in tutorials or lectures.

60% Final Exam. Similar structure to the problem sets. Three questions with multiple parts. Some with calculations, and some asking you to verbally explain the underlying contract structure.

Lecture Recordings? Full lecture recordings on hand.

Notes/Materials Available: Full slides provided.


Hongyi Li, 3.5/5. This might not be a fair score for Hongyi, since I had Gabriele, Federico, and Gautam last term for other third-year micro courses - I'm a bit spoiled! I've had friends say he was one of their favourite lecturers. I enjoyed his lectures, and his notes were very comprehensive.

Year & Trimester of completion: 2021/T3

Difficulty: 4/5.

Overall Rating:  3/5.

Your Mark/Grade: TBA

Comments: This course should really be called Contract Theory. We spent all of our time investigating interactions between principals and agents (essentially just employers and employees) under different circumstances. Principals have one set of desired outcomes (maximise profits), and agents have another, often conflicting, set (maximise pay). The fun of this course comes from playing around with when the principals and agents make their moves, how the principal pays the agent, and how the agent produces the good. I found the weeks spent on Asset Ownership and Career Incentives particularly interesting because of how fun it was to keep track of all the different variables and timings that were introduced.
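The principal-agent setup described above can be sketched as a toy game. This is my own illustrative example, not the course's actual model; all the numbers, the quadratic effort cost, and the piece-rate contract form are assumptions chosen just to show the conflicting objectives:

```python
# Toy principal-agent game: the principal offers a piece rate, the agent
# responds by choosing effort to maximise pay minus effort cost, and the
# principal anticipates this when choosing the contract.

EFFORTS = [0.0, 1.0, 2.0, 3.0]  # effort levels the agent can choose

def agent_best_effort(piece_rate: float) -> float:
    # Agent utility: wage (piece_rate * effort) minus a quadratic effort
    # cost 0.5 * effort^2. Ties break toward lower effort (agent shirks
    # when indifferent, since max() keeps the first maximiser).
    return max(EFFORTS, key=lambda e: piece_rate * e - 0.5 * e ** 2)

def principal_profit(piece_rate: float) -> float:
    # Principal anticipates the agent's best response; output equals
    # effort here, and the principal keeps output minus the wage bill.
    effort = agent_best_effort(piece_rate)
    return effort - piece_rate * effort

# Principal picks the profit-maximising contract from a menu of rates.
best_rate = max([0.25, 0.5, 0.75], key=principal_profit)
print(best_rate)  # 0.75: the only rate on this menu that induces effort
```

The point of the exercise is the one the review makes: the principal cares about profit, the agent about pay net of effort, and the interesting variation comes from changing the contract form and the timing of moves.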

There are a couple of weeks in the middle that I thought were a bit of a slog - the lectures on Performance Evaluation, Teamwork, Incentives, and Authority. Each took me a while to understand the underlying interaction, and even now I can't say I'm very comfortable with them.

Overall, this is a pretty fun course. I wouldn't recommend that you take this over courses like ECON3106 or ECON3121 though.

Aaaand that's it for my undergrad degree! It's been a great ride for the last four years at UNSW, even with the chaos of 2020 and 2021. I hope that my course reviews have been comprehensible and useful for everyone who has read them. I might be doing econ honours next year, so keep an eye out for a review on that at the end of next year if I'm not burnt out at the end. Thanks everyone!
« Last Edit: November 29, 2021, 07:38:29 pm by HelpICantThinkOfAName »
Studying Economics/Mathematics @ UNSW