The survey data were collected via a web-based front end and returned as a mixture of numeric and textual responses.  The responses were hand-coded to the desired numerical outcomes, for both the nominal (language) and ordinal (Likert) items.

Cleaning the Data
For solutions (questions C1a-C2g), we code a NO (did not use the technique) with an automatic 0 in the impact column to capture complete ineffectiveness, equivalent to not using the solution at all.  If respondents did not answer at all, we assumed a "Not Used" and coded 0.  This conflates two cases, since someone who did not use a technique is not the same as someone who used a technique but experienced zero effectiveness; however, there were no YES responses with 0 effectiveness.
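The coding rule above can be sketched as a small function.  This is an illustrative reconstruction, not the actual coding script; the argument names and the YES/NO string values are assumptions.

```python
def code_solution(used, impact):
    """Code a C1a-C2g solution question (illustrative sketch).

    used   -- "YES", "NO", or None for a blank (no response)
    impact -- reported effectiveness score when used, else None
    """
    if used != "YES":
        # "NO" and blank are both coded Not Used: automatic 0 impact
        return 0
    return impact
```

For example, `code_solution("NO", None)` and `code_solution(None, None)` both yield 0, while `code_solution("YES", 4)` passes the reported score through.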

Although the effectiveness scale for solutions employed was supposed to allow only the responses {0, 1, 3, 5}, the data came back as {0, 1, 2, 3, 4, 5}, yielding a 6-step scale which we interpret as follows:

5 = most effective remedial action(s)
4 = more effective
3 = some positive effect on the solution
2 = hard to tell
1 = very little impact on the solution
0 = no impact on the solution
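The 6-step scale can be represented as a simple lookup table; the label strings here are paraphrases of the scale descriptions and the constant name is an assumption.

```python
# Illustrative lookup for the observed 6-step effectiveness scale
EFFECTIVENESS_LABELS = {
    5: "most effective remedial action(s)",
    4: "more effective",
    3: "some positive effect on the solution",
    2: "hard to tell",
    1: "very little impact on the solution",
    0: "no impact on the solution",
}

def effectiveness_label(score):
    """Return the scale label for a coded score, or None if out of range."""
    return EFFECTIVENESS_LABELS.get(score)
```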

Question C1o, Other Solution, does not have coding other than the "please list," whereas C2g, Other Non-Solution, does have coding (YES/NO, Value, and Specify).  In the former case, we had

For Question D (D1a, D1b, D1c), a blank was treated as a NO unless the respondent entered a YES to other comments.
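The blank-as-NO rule for the D questions amounts to a one-line default.  This is a hypothetical sketch; the function name and string values are assumptions.

```python
def code_d_question(response):
    """Code D1a-D1c: only an explicit YES counts; blank or anything
    else is treated as NO (illustrative sketch)."""
    return "YES" if response == "YES" else "NO"
```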

Treatment of Missing Data
For the Likert-scale questions, several responses were 0, outside the allowed range of 1-5.  Presumably this reflects a lack of response, so these data are treated as missing.
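Treating out-of-range Likert responses as missing can be sketched as follows, with `None` standing in for a missing value; the function name is an assumption.

```python
def recode_likert(value):
    """Return a Likert response if it falls in the allowed range 1-5;
    otherwise (e.g. the observed 0s) treat it as missing (None)."""
    return value if value in range(1, 6) else None
```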

Missing data cannot be identified for the Cause questions, since that scale includes the zero response (no importance at all).