Primary standards should have a high molar mass (>110 g/mol?) to minimise weighing errors.
What does this mean exactly, and is specifying >110 g/mol necessary?
Let's pretend you used lithium fluoride as a primary standard. Its molar mass is 6.9 + 19 = 25.9 g/mol.
Now let's say you weighed out 259 mg of LiF, but your balance uncertainty was 5 mg. The absolute uncertainty in the number of moles is then (5 mg)/(molar mass).
However, if your primary standard were KCl, for instance, with a molar mass of 39 + 35.5 = 74.5 g/mol, the uncertainty in the number of moles would be smaller for the same mass weighed.
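Putting numbers to this (a quick sketch using the illustrative 5 mg balance uncertainty above):

$$\Delta n_{\mathrm{LiF}} = \frac{5\ \mathrm{mg}}{25.9\ \mathrm{g\,mol^{-1}}} \approx 0.19\ \mathrm{mmol}, \qquad \Delta n_{\mathrm{KCl}} = \frac{5\ \mathrm{mg}}{74.5\ \mathrm{g\,mol^{-1}}} \approx 0.067\ \mathrm{mmol}.$$

Equivalently, to obtain a fixed number of moles $n$ you must weigh out a mass $m = nM$, so the relative uncertainty $\Delta m/m = \Delta m/(nM)$ shrinks as $M$ grows. The >110 g/mol figure is a rule of thumb rather than a hard cutoff: any sufficiently high molar mass keeps the weighing error small.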
Why does an increase in temperature for an exothermic reaction cause K (the equilibrium constant) to decrease?
Conversely, why does an increase in temperature for an endothermic reaction cause K to increase?
There are a few ways to explain this.
1. Using the explicit dependence of the equilibrium constant on temperature (the van 't Hoff equation). This is out of the course, but a sketch is given after this list.
2. Le Chatelier's principle (this IS in the course)
OK, so let's say you have A <=> B and this is exothermic. If we add heat, the system will attempt to partially negate this disturbance by reducing the system temperature. The backward reaction is endothermic, so it absorbs the added heat: the equilibrium shifts backwards, forming more reactant and less product, and thus K decreases. Try using this logic on your endothermic reaction.
3. Considering the effects on the reaction rates. Heating a reaction flask increases the rate of the forward AND the backward reaction. It can be shown that for an exothermic reaction, heating the flask increases the forward rate less than the backward rate, and vice versa for cooling. This is again out of the course; a sketch follows below.
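For approach 1, the standard result is the van 't Hoff equation (general thermodynamics, not tied to any particular reaction):

$$\frac{\mathrm{d}\ln K}{\mathrm{d}T} = \frac{\Delta H^\circ}{RT^2}$$

For an exothermic reaction $\Delta H^\circ < 0$, so $\ln K$ (and hence $K$) decreases as $T$ increases; for an endothermic reaction $\Delta H^\circ > 0$, and $K$ increases with $T$.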
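For approach 3, here is a rough sketch of why the two rates respond unequally, assuming a single-step reaction (so that $K = k_f/k_r$) with Arrhenius behaviour for both rate constants:

$$K = \frac{k_f}{k_r} = \frac{A_f}{A_r}\exp\!\left(-\frac{E_{a,f} - E_{a,r}}{RT}\right), \qquad E_{a,f} - E_{a,r} \approx \Delta H.$$

For an exothermic reaction $E_{a,f} < E_{a,r}$: the backward reaction has the larger activation energy, so its rate constant grows faster on heating, and $K = k_f/k_r$ falls. For an endothermic reaction the inequality reverses and $K$ rises.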