There is no way to change the block size of an existing file "in place". Block size is a per-file attribute fixed at write time and tied to the on-disk layout of block files on the DataNodes, so changing it requires rewriting the data.
As for doing that rewrite with a distributed job, you can use distcp with a block-size override passed on the command line. (See the example below.) Note that this temporarily doubles the storage consumed, since the copy exists alongside the original until you delete or replace the original.
> hadoop distcp -D dfs.blocksize=268435456 /input /output
You can confirm the new block size took effect by comparing `-stat` output on the source and the copy (`%o` prints the block size, `%n` the file name):
> hdfs dfs -stat 'name=%n blocksize=%o' /input/hello
> hdfs dfs -stat 'name=%n blocksize=%o' /output/hello
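The `dfs.blocksize` override is given in raw bytes; `268435456` in the distcp command above is 256 MiB. HDFS also requires the block size to be a multiple of the checksum chunk size (`dfs.bytes-per-checksum`, 512 bytes by default), which any whole-MiB value satisfies. A quick sanity check of the arithmetic:

```python
# Sanity-check the block-size value used in the distcp example above.
MiB = 1024 * 1024
new_block_size = 268435456  # value passed to -D dfs.blocksize

# 268435456 bytes is exactly 256 MiB...
assert new_block_size == 256 * MiB
# ...and a valid HDFS block size: a multiple of the default
# checksum chunk size (dfs.bytes-per-checksum = 512 bytes).
assert new_block_size % 512 == 0

print(f"{new_block_size} bytes = {new_block_size // MiB} MiB")
```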