For us it has worked out quite well. The NFS server has been running stably, and the bucket is about 90 GB and growing — largely big images that are processed further in a pipeline. Our NFS server is only accessed by the Airflow instance and a few developers, so it's not been tested under heavy concurrency.
A few things confused us at first. For example, a user created a folder through the GCP console to tidy up the bucket, but it would not show up on the mounted drive. (https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/semantics.md#files-and-dirs)
The solution for that issue was to create the folder through the mounted drive instead of the GCP console (mkdir "the directory"), and it all worked out nicely.
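A rough sketch of the workaround, with the mount point and bucket name made up for illustration — the key point is that the directory is created through the mount, so gcsfuse writes the placeholder object it expects:

```shell
# Hypothetical mount point; falls back to a temp dir so the sketch runs anywhere.
MNT=${MNT:-"$(mktemp -d)"}

# Creating the folder through the mount (instead of the GCP console)
# makes it visible to gcsfuse:
mkdir "$MNT/the-directory"
ls "$MNT"

# Alternatively, gcsfuse has an --implicit-dirs flag that infers
# directories from object names, at some performance cost:
# gcsfuse --implicit-dirs my-bucket /mnt/bucket
```

We went with the mkdir approach rather than remounting, since it required no changes on the server side.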
I would, however, recommend that if you implement this, you also make a plan to use the native client libraries to access Cloud Storage, as that will be much more efficient and future-proof :)
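By "native libraries" I mean something like the `google-cloud-storage` Python package, talking to the bucket directly instead of going through the FUSE mount. A minimal sketch, with bucket and object names made up — the helper also illustrates why the folder issue above happens: GCS has no real directories, only object-name prefixes.

```python
# Sketch only: assumes the google-cloud-storage package and application
# default credentials are set up. Import is guarded so the helper below
# still works as a plain illustration without the package installed.
try:
    from google.cloud import storage
except ImportError:
    storage = None  # library not installed; prefix helper still usable

def pipeline_blob_name(batch: str, filename: str) -> str:
    """Build an object name. "Folders" in GCS are just name prefixes."""
    return f"{batch}/{filename}"

def download_image(bucket_name: str, batch: str, filename: str, dest: str) -> None:
    """Download one object directly via the API, bypassing the mount."""
    client = storage.Client()  # picks up application default credentials
    blob = client.bucket(bucket_name).blob(pipeline_blob_name(batch, filename))
    blob.download_to_filename(dest)
```

For a pipeline like ours, direct API calls avoid the FUSE overhead on every read and are not tied to the mount's directory semantics.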