Yes @Saravanan Selvam. If a record is too large to fit entirely inside a split, the record simply spans the split boundary: the reader of the first split keeps reading past the boundary to finish that record, and the reader of the next split skips the partial record at its start. It also depends on the compression codec in use; Hadoop supports multiple layouts such as record-compressed and block-compressed SequenceFiles, and in a SequenceFile the sync markers let a reader locate where a record begins and ends after an arbitrary offset. The splits themselves are computed on the client side by InputFormat.getSplits, while the RecordReader takes care of the record boundaries within each split.
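To make that concrete, here is a minimal sketch (plain JDK, no Hadoop dependency) of the convention that TextInputFormat's LineRecordReader follows for plain text: a reader that does not own the first split discards everything up to the first newline, and every reader finishes its last record even if that means reading past its split's end byte. The class name, file contents and the 20-byte split size below are made up for the demo and are not part of any Hadoop API.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SplitBoundaryDemo {

    /** Return the records that "belong" to the byte range [start, start+length). */
    static List<String> readSplit(byte[] file, long start, long length) {
        List<String> records = new ArrayList<>();
        int pos = (int) start;
        int end = (int) Math.min(start + length, file.length);

        // Not the first split: skip the tail of a record that started in the
        // previous split; the previous reader finishes it by crossing the boundary.
        if (start != 0) {
            while (pos < file.length && file[pos - 1] != '\n') {
                pos++;
            }
        }

        // Read whole records; the last one may extend beyond 'end'.
        while (pos < end && pos < file.length) {
            int lineStart = pos;
            while (pos < file.length && file[pos] != '\n') {
                pos++;                       // keep reading even past 'end'
            }
            records.add(new String(file, lineStart, pos - lineStart,
                                   StandardCharsets.UTF_8));
            pos++;                           // step over the '\n'
        }
        return records;
    }

    public static void main(String[] args) {
        byte[] file = ("record-one\nrecord-two-is-long\nrecord-three\n")
                .getBytes(StandardCharsets.UTF_8);
        long splitSize = 20;                 // deliberately cuts records in half

        for (long start = 0; start < file.length; start += splitSize) {
            System.out.println("split @" + start + " -> "
                    + readSplit(file, start, splitSize));
        }
    }
}
```

Even though the 20-byte splits fall in the middle of records, each record comes out exactly once, which mirrors how no data is lost or duplicated when HDFS block boundaries cut through records.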
I came across a brief and clear explanation for the same kind of question. Please do check it:
https://stackoverflow.com/questions/14291170/how-does-hadoop-process-records-split-across-block-boun...
Hope it Helps!!