MongoDB doesn't reclaim disk space after dropping a collection, and I can't find the collection matching a WiredTiger metadata URI

Because we are short on disk space, I dropped a collection that is no longer useful. But the disk usage remained the same after I dropped the collection.

I went to the data directory and found that one metadata file is the biggest file (317 GB), and I can't figure out which collection is using this file. I checked every collection's URI and found that no collection has this URI, but the metadata file still exists in the data directory and occupies a lot of disk space.

Does anyone know why?

Welcome to the forum @Rachel_Qiu

Did you check every collection in every database? MongoDB Compass is handy here for sorting the databases and collections by storage size.

// Print the WiredTiger data file URI for every collection in every database,
// so the on-disk files can be matched against collections.
db.adminCommand({listDatabases: 1}).databases.forEach(
  function(database) {
    var siblingDb = db.getSiblingDB(database.name);
    siblingDb.getCollectionNames().forEach(
      function(collection) {
        print([database.name, collection].join('.'), '\t',
              siblingDb.getCollection(collection).stats().wiredTiger.uri);
      }
    );
  }
);

Yes, I did check, and I ran the script you gave me; that biggest metadata file is not on the list.

MongoDB / WiredTiger will keep the allocated disk space reserved because it assumes that you will re-use the space with new data at some point. I think if you want it to actually release the used space, you would have to run a compact operation. You can find the documentation here:
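As a minimal sketch of what that would look like in the mongo shell, assuming a placeholder collection name ("myCollection"); compact only affects the mongod it is sent to, so it would need to be run on each member separately:

// Ask WiredTiger to release unused space held by one collection's data file.
// "myCollection" is a placeholder name used for illustration only.
db.runCommand({ compact: "myCollection" })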

@frankshimizu @Rachel_Qiu dropped the collection, not just deleted documents. Dropping usually results in the collection's file being removed.

Also, as @Rachel_Qiu points out, none of the collection URIs point to the file in question. So which collection would need compacting?

@Rachel_Qiu What mongod version is running?
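You can check it from the mongo shell, for example:

// Report the version of the mongod the shell is connected to.
db.version()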


dropped the collection, not just deleted documents. This usually results in the collection file being removed.
Thanks for clarifying that, and sorry for getting it wrong.


There isn't enough free space for a compact operation; I think compact needs at least as much free space as the existing data.

I already solved this problem by restarting mongod. The secondary node had reclaimed the disk space and that big metadata file disappeared. On the primary node I renamed the metadata file; it didn't affect the service, so I moved it to another machine.

Thank you for asking :grinning:

Although I still don't know why it happened.

Hi @Rachel_Qiu

Could you provide the details mentioned by @chris, namely:

  1. Your MongoDB server version and your OS version
  2. The deployment topology (replica set/sharded cluster)
  3. How did you perform the drop command, and from where?

Best regards,
Kevin

  1. version: 3.4.6
  2. replica set
  3. db.COLLECTION_NAME.drop()

Hi @Rachel_Qiu

version: 3.4.6

That is quite an old version of MongoDB (released way back in July 2017). This version is affected by SERVER-31101, which was fixed in MongoDB 3.4.11 and newer. Note that the MongoDB 3.4 series has not been supported since January 2020.

I would encourage you to upgrade to a supported version of MongoDB.

Best regards,
Kevin


@kevinadi

Thanks, Kevin.
I will consider that if upgrading to a new version doesn't require data migration.
