Why, when reading analog data, do we divide by 65535? I know it's the 16-bit value that corresponds to 3.3 volts, but why is this step needed?
Also which is correct:
calculated voltage = raw data * (hardware.voltage() / 65535)
calculated voltage = hardware.voltage() * (raw data / 65535)
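(As an aside: both forms compute the same value, since the grouping of the multiply and divide doesn't change the result; the only practical caveat is to do the division in floating point. A quick check of this, using assumed example values since it isn't running on an imp:)

```python
supply = 3.3   # stand-in for what hardware.voltage() might return
raw = 20517    # example raw ADC reading

# Form 1: raw data * (hardware.voltage() / 65535)
v1 = raw * (supply / 65535.0)
# Form 2: hardware.voltage() * (raw data / 65535)
v2 = supply * (raw / 65535.0)

print(v1, v2)  # both are the same voltage, about 1.033 V
```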
What does “converting the number from the ADC to a voltage” mean?
The ADCs return a number which represents the analog voltage sampled; the lowest number they can return is zero, representing 0v, and the highest is 65535, which represents the supply voltage.
This number doesn't directly map to a voltage. Say you applied 1.5v to a pin and read it. If your chip was supplied at 3.0v, then you would read (1.5/3.0)*65535 = 32767 or so… but, with the same 1.5v pin voltage and a 3.3v supply voltage, you'd read (1.5/3.3)*65535 = 29788.
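That supply-dependence can be checked with a few lines of plain arithmetic (Python here, standing in for what the imp's ADC does internally):

```python
def raw_reading(pin_volts, supply_volts, full_scale=65535):
    # The ADC reports the pin voltage as a fraction of the supply,
    # scaled to the 16-bit range (truncated to an integer here).
    return int((pin_volts / supply_volts) * full_scale)

print(raw_reading(1.5, 3.0))  # 32767 — matches the figure above
print(raw_reading(1.5, 3.3))  # 29788 — same pin voltage, different supply
```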
To work out the actual voltage on a pin, you first call hardware.voltage(), which internally reads a known, fixed reference voltage with the ADC, uses that reading to determine the chip's actual supply voltage, and returns it. You then scale that supply voltage by the fraction you read (raw/65535), as whatever voltage is on your pin is a fraction of the supply.
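A minimal sketch of that two-step conversion, with hardware.voltage() replaced by a mock returning an assumed 3.3v supply, since this isn't running on an imp:

```python
ADC_FULL_SCALE = 65535.0

def mock_hardware_voltage():
    # Stand-in for the imp's hardware.voltage(); assume a 3.3 V supply.
    return 3.3

def pin_voltage(raw):
    # raw/65535 is the fraction of the supply present on the pin;
    # multiplying by the measured supply gives the actual pin voltage.
    return mock_hardware_voltage() * (raw / ADC_FULL_SCALE)

print(pin_voltage(32767))  # roughly half the supply, about 1.65 V
```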
Question: if my sensor's output was something like 20517, do I need to do a conversion?
If you have an imp running at 3.3v, then 20517 corresponds to about (20517/65535)*3.3 = 1.033v.
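Checking that arithmetic directly (3.3 assumed as the supply voltage):

```python
# 20517 counts out of 65535, scaled by an assumed 3.3 V supply
volts = (20517 / 65535) * 3.3
print(round(volts, 3))  # about 1.033 V
```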
Your sensor datasheet will tell you what the output voltage corresponds to. What is the sensor?