I'm not sure I understand your question... There is no direct communication between the source and the sink; they are connected through the channel. The source puts Flume events into the channel and the sink takes those events from it. In your case those puts and takes are disk writes and reads, since you are using a file channel; if you were using a memory channel, they would happen in volatile memory instead.
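To make the wiring concrete, here is a minimal sketch of an agent configuration with an HTTP source, a file channel and a Kafka sink; the agent/component names, port, paths and topic are placeholders, and the exact Kafka sink property names differ slightly between Flume versions (this assumes the 1.7+ style):

```
# Placeholder names: agent, http-src, file-ch, kafka-snk
agent.sources  = http-src
agent.channels = file-ch
agent.sinks    = kafka-snk

# HTTP source: receives events from the external client
agent.sources.http-src.type = http
agent.sources.http-src.port = 44444
agent.sources.http-src.channels = file-ch

# File channel: every put/take is a disk write/read
# (switch type to "memory" and the puts/takes happen in volatile memory)
agent.channels.file-ch.type = file
agent.channels.file-ch.checkpointDir = /var/flume/checkpoint
agent.channels.file-ch.dataDirs = /var/flume/data

# Kafka sink: takes events from the channel and publishes them
agent.sinks.kafka-snk.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.kafka-snk.kafka.bootstrap.servers = localhost:9092
agent.sinks.kafka-snk.kafka.topic = my-topic
agent.sinks.kafka-snk.channel = file-ch
```

Notice the source and the sink never reference each other, only the channel; that is the only link between them.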
Maybe you are wondering if you can skip the channel and pass the Flume events the source builds directly to the sink... In that case Flume makes no sense, since such an architecture throws away all the benefits of the internal channel (reliability, fault tolerance, absorption of load peaks...). A simple script or application that receives the data and writes it directly to Kafka may fit your needs (see the sketch below), but of course such a solution will not scale, will not be fault tolerant, and leaves you dealing with the HTTP reception of the data, the Kafka API for outputting it, and many other things you lose by not using Flume.
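Just to illustrate what that "no Flume" alternative would look like (and what it gives up), here is a rough Python sketch using the standard library HTTP server and the third-party kafka-python package; the topic, port and broker address are placeholders:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

class DirectToKafkaHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # No intermediate channel here: if Kafka is down or slow, the event
        # is lost or the client blocks -- exactly the reliability and
        # load-absorption guarantees you give up by dropping Flume.
        producer.send("my-topic", value=body)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DirectToKafkaHandler).serve_forever()
```

It works for a toy setup, but everything the file channel gives you (durable buffering between reception and delivery) has to be reinvented by hand.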
Finally, you may be asking about a persistent connection between the external data source sending the data and the Flume HTTP source receiving it... That's interesting, and I think you could achieve such a persistent connection by sending a large timeout value in the Keep-Alive HTTP header.
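From the client side that would look roughly like the request below (the timeout value is just an example, and whether the HTTP source's embedded server honors it is something you would have to verify; the body follows the JSON format the source's default handler expects, an array of events with "headers" and "body"):

```
POST / HTTP/1.1
Host: flume-host:44444
Content-Type: application/json
Connection: keep-alive
Keep-Alive: timeout=600, max=1000

[{"headers": {"source": "sensor-1"}, "body": "event payload"}]
```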