I have a dilemma: I am not sure what the best approach to the following scenario is, and whether it makes sense to invest time in developing a kernel module.
I have hardware (an FPGA) that is exposed as many modules (around 30). Each module can be described by:
- Base address of the module;
- Each field's offset (from the base address);
- The maximum number of fields per module is around 10;
- Each field has its own type, e.g. uint32_t, float32_t, uint32_t[], etc.;
- Some fields are read/write and others are read-only;
- Usually a module can be used as-is, i.e. no logic is needed to check whether a write to a field is allowed (except in a few cases).
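To make the layout concrete, here is a minimal sketch of how the module/field map above could be expressed as a descriptor table in C++. All names, the example base address, and the offsets are made up for illustration; only the structure (base address + up to ~10 typed fields with offsets and access modes) comes from the description above.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Illustrative field types and access modes; the real FPGA map may
// have more variants (e.g. fixed-size arrays of different lengths).
enum class FieldType { U32, F32, U32Array };
enum class Access { ReadOnly, ReadWrite };

struct FieldDesc {
    std::string name;
    std::size_t offset;   // byte offset from the module base address
    FieldType   type;
    Access      access;
    std::size_t count;    // element count; > 1 only for array fields
};

struct ModuleDesc {
    std::string name;
    std::uintptr_t base;             // physical base address of the module
    std::vector<FieldDesc> fields;   // at most ~10 per module
};

// One made-up module with two fields, to show the shape of the table.
inline ModuleDesc example_module() {
    return ModuleDesc{
        "module1", 0x43C00000,
        {
            {"status", 0x00, FieldType::U32, Access::ReadOnly,  1},
            {"gain",   0x04, FieldType::F32, Access::ReadWrite, 1},
        }
    };
}
```

With ~30 such descriptors in one table, both a user-space library and a driver could be generated or driven from the same data, instead of hand-maintaining offsets in two places.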
On the target device there is a custom Linux distribution (built from Yocto).
Which of the following do you think is better?
1. A user-space application that uses mmap (on /dev/mem, to map all the modules) and then reads/writes directly from/to memory. I have a working C++ implementation, but maybe it is not the best solution: I have to set all the offsets manually, use many reinterpret_cast<>s to read the data properly, and if anything is wrong the application crashes.
2. A character device driver that exposes each module as /dev/module1, /dev/module2, etc., accessed from user space via open/read/write/release/ioctl. I have just started reading a huge manual on Linux kernel development, and I am not sure whether a character device is a good idea here, in particular how to expose so many modules with so many fields to user space.
3. Other.
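On option 1: part of the fragility may come from the reinterpret_cast<> pattern itself, which is undefined behaviour on misaligned or aliased accesses. A sketch of a typed accessor that goes through memcpy instead is below; the function names are mine, and in the real application `base` would be the pointer returned by mmap() on /dev/mem, while here it can be any byte buffer, which also makes the accessors unit-testable without hardware.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <type_traits>

// Read a field of type T at a byte offset from a mapped base.
// memcpy through a byte pointer avoids the strict-aliasing and
// alignment pitfalls of reinterpret_cast on raw pointers.
template <typename T>
T read_field(const std::uint8_t* base, std::size_t offset) {
    static_assert(std::is_trivially_copyable<T>::value,
                  "register fields must be trivially copyable");
    T value;
    std::memcpy(&value, base + offset, sizeof(T));
    return value;
}

template <typename T>
void write_field(std::uint8_t* base, std::size_t offset, T value) {
    static_assert(std::is_trivially_copyable<T>::value,
                  "register fields must be trivially copyable");
    std::memcpy(base + offset, &value, sizeof(T));
}
```

One caveat: for true memory-mapped I/O the compiler is free to merge or reorder plain memcpy accesses, so real register reads/writes may additionally need volatile accesses of the exact register width; the sketch only addresses the aliasing/alignment side of the problem.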
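On option 2: if each module became a seekable char device, user-space access could be as simple as pread/pwrite at the field offset. The sketch below shows that pattern; the device path and offset passed in are up to the caller, and since the function only uses plain POSIX file I/O it works unchanged against a regular file, so it can be exercised without the driver existing yet.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>

// Read one uint32_t field from a (hypothetical) per-module char
// device, addressing the field by its byte offset. Returns false on
// open failure or a short read.
bool read_u32_field(const char* dev_path, off_t field_offset,
                    std::uint32_t* out) {
    int fd = open(dev_path, O_RDONLY);
    if (fd < 0) return false;
    ssize_t n = pread(fd, out, sizeof(*out), field_offset);
    close(fd);
    return n == static_cast<ssize_t>(sizeof(*out));
}
```

With a driver behind it, reading the second field of module 1 could look like `read_u32_field("/dev/module1", 0x04, &v)` (path and offset hypothetical); fields with side effects or array types would be the cases where an ioctl or a per-field sysfs attribute might fit better than raw offsets.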
Thanks a lot for any ideas.