Let me start by making it absolutely clear: RAID1 is not intended to do anything other than provide fault tolerance against disk failure, and the only time a disk should be removed is to replace it after a failure. You may feel that it can be made to deliver other functionality, but that is not what it was designed to do, and you run the risk of data loss and/or corruption.
You'll find that a lot of IT equipment works well when used as designed; it's when you try to make it do things it was not designed to do that you'll experience problems. This is a NAS server: it's designed for you to put disks in it and then store data on them. If you want something to swap media in and out of, you need to buy a removable media drive.
For what it's worth, you run the risk of losing data if you swap drives around in pretty much every RAID device I've seen, and that includes some very expensive SAN storage arrays. The first thing you learn when dealing with RAID arrays is not to remove drives unless absolutely necessary, and to mark them so you can put them back exactly where you took them from.
For protection against human error, you back your data up. RAID1 is NOT a form of backup; it is specifically intended to provide fault tolerance against disk failure, to reduce (and hopefully eliminate) the possible impact of a disk failure.
If you are running a system without fault-tolerant disks and a disk fails, your system stops functioning until the failed disk can be replaced and the data restored from a backup (if one is available). If you are running a system with fault-tolerant disks and a disk fails, the system should continue to run. Depending on its design, it may allow you to take corrective action without shutting it down, or it may require you to shut it down at some later, hopefully more convenient, time. The point here is that the system did not stop functioning when the disk failed; it, so to speak, tolerated the fault.
Before I answer your question on converting RAID1 to JBOD, I'd like to point out something. JBOD, as configured on a DNS-323, forms a single storage volume by concatenating the two drives; it does not give you two separate drives. One thing you need to be aware of is that with concatenated drives, failure of either drive can result in ALL of the stored data being lost. Theoretically, only the data stored on the failed disk is gone and whatever is on the remaining disk should be available, but in reality that has not been my experience.
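To make the trade-off concrete, here's a toy model of the two layouts under a single-disk failure. This is a simplified sketch for illustration only, not how the DNS-323 firmware actually lays data out on disk; the block values and capacities are made up.

```python
# Toy model: RAID1 mirrors the same blocks onto both disks (capacity of
# one disk), while JBOD concatenates the disks into one big volume
# (capacity of both disks, but no redundancy).

# RAID1: each disk holds a full copy of the same 4 blocks.
raid1 = [list("ABCD"), list("ABCD")]

# JBOD: 8 blocks are concatenated across the two disks.
jbod = [list("ABCD"), list("EFGH")]

def surviving_blocks(disks, failed_disk, mirrored):
    """Return the user data still readable after one disk fails."""
    ok = [d for i, d in enumerate(disks) if i != failed_disk]
    if mirrored:
        # The surviving mirror still holds a complete copy.
        return list(ok[0])
    # Concatenation: the failed disk's span is simply gone, and in
    # practice, if the filesystem's metadata lived on that disk, the
    # remaining blocks may be unreadable too.
    return [block for disk in ok for block in disk]

print(surviving_blocks(raid1, failed_disk=0, mirrored=True))   # full copy survives
print(surviving_blocks(jbod, failed_disk=0, mirrored=False))   # at best, half the data
```

The model shows the best case for JBOD; as noted above, in practice losing either half of a concatenated volume often takes the whole filesystem with it.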
If you want to convert your RAID1 array to separate volumes, open the web admin interface, select TOOLS, then RAID, set the RAID type, and allow it to reformat; you will of course lose any data stored on the drives. You can theoretically avoid losing the data by removing one of the drives before reformatting the other; it should be recognized as a separate drive when reinserted after the format. However, consider this: if your original data and your backup copy are on the same device, what do you do if that device (not the disk, the device itself) fails?