fedflow.utils package¶
Fedflow utils¶
Some utils for fedflow.
- class fedflow.utils.ModuleUtils[source]¶
Bases: object
- classmethod migrate_module(src: str, dst: str, dst_name: Optional[str] = None) → None[source]¶
Migrate a module from src to dst, and rename it to dst_name.
- Parameters
src – the module source dir.
dst – target dir.
dst_name – the new module name; the original module name is kept if this param is None.
- Returns
None
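A minimal usage sketch (the source and target paths here are hypothetical):
>>> from fedflow.utils import ModuleUtils
>>> # Copy the module directory ./models/cnn into ./jobs/job-1,
>>> # renaming the module to "net".
>>> ModuleUtils.migrate_module("./models/cnn", "./jobs/job-1", dst_name="net")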
- classmethod import_module(name: str, path: Optional[str] = None)[source]¶
Import a module dynamically.
- Parameters
name – the module name.
path – the module path.
- Returns
the imported module
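A usage sketch, continuing the hypothetical paths above:
>>> net_module = ModuleUtils.import_module("net", "./jobs/job-1")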
Data utils¶
Classes and methods for preprocessing datasets.
Trainers¶
Classes and methods for training.
- class fedflow.utils.trainer.SupervisedTrainer(model, optimizer, criterion, lr_scheduler=None, *, init_model_path=None, init_optim_path=None, dataset=None, batch_size=32, epoch=50, epoch_action=None, checkpoint_interval=10, device='cuda:0', console_out=None)[source]¶
Bases: object
A trainer used for supervised training with PyTorch.
After training finishes, 4 files will appear in the current dir:
history.json: the history data (train and validation loss and accuracy).
history.png: a chart of the history.
parameter.pth: the parameters of the model.
optimizer.pth: the parameters of the optimizer.
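A sketch of inspecting the saved history after a run (the exact JSON structure is an assumption based on the description above, so the keys should be checked rather than relied on):
>>> import json
>>> with open("history.json") as f:
>>>     history = json.load(f)
>>> sorted(history)  # inspect the top-level keys before indexing into them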
- class History(train_loss, train_acc, val_loss, val_acc, lr)¶
Bases: tuple
Record history data during training.
- property lr¶
learning rate of every epoch
- property train_acc¶
train accuracy of every epoch
- property train_loss¶
train loss of every epoch
- property val_acc¶
validation accuracy of every epoch
- property val_loss¶
validation loss of every epoch
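Since History derives from tuple with named fields, it behaves like a named tuple; a minimal sketch (the values are made up, and constructing it by hand is only for illustration):
>>> h = SupervisedTrainer.History(train_loss=0.52, train_acc=0.81,
>>>                               val_loss=0.60, val_acc=0.78, lr=0.01)
>>> h.val_acc
0.78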
- __init__(model, optimizer, criterion, lr_scheduler=None, *, init_model_path=None, init_optim_path=None, dataset=None, batch_size=32, epoch=50, epoch_action=None, checkpoint_interval=10, device='cuda:0', console_out=None)[source]¶
Construct a trainer.
- Parameters
model – an instance of torch.nn.Module.
optimizer – an instance of a PyTorch optimizer.
criterion – the loss function.
lr_scheduler – an instance of torch.optim.lr_scheduler._LRScheduler or torch.optim.lr_scheduler.ReduceLROnPlateau.
init_model_path – the initial model parameters path.
init_optim_path – the initial optimizer parameters path.
dataset – the dataset used by this trainer.
batch_size – the batch size.
epoch – the number of epochs.
epoch_action –
when every epoch finishes, the epoch_action callable will be invoked; in it you can update lr, etc. The following is an example of epoch_action:
>>> class EpochAction(object):
>>>     def __init__(self, optim):
>>>         super(EpochAction, self).__init__()
>>>         self.reduce = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, mode="max")
>>>
>>>     # The complete method signature is:
>>>     # def __call__(self, *, model, optimizer, criterion, lr_scheduler,
>>>     #              train_loss, train_acc, val_loss, val_acc, lr):
>>>     def __call__(self, *, val_acc, **kwargs):
>>>         self.reduce.step(val_acc)
checkpoint_interval – the interval at which parameters are saved; the trainer will not save parameters if this param is 0.
device – the device used for training.
console_out – redirect print output.
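A minimal construction sketch (the model choice and hyperparameters here are illustrative, not defaults):
>>> import torch
>>> import torchvision
>>> from fedflow.utils.trainer import SupervisedTrainer
>>> model = torchvision.models.resnet18(num_classes=10)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
>>> criterion = torch.nn.CrossEntropyLoss()
>>> trainer = SupervisedTrainer(model, optimizer, criterion,
>>>                             batch_size=64, epoch=20, device="cpu")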
- mount_dataset(dataset, val_dataset=None, *, val_ratio=0.3, batch_size=32) → None[source]¶
Mount a dataset to this trainer.
- Parameters
dataset – the complete dataset or train dataset.
val_dataset – the validation dataset; if it is None, this method will split a validation dataset from dataset.
val_ratio – the ratio of the validation dataset when splitting.
batch_size – the batch size.
- Returns
None
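A usage sketch, assuming a torchvision dataset (the dataset choice and split ratio are hypothetical):
>>> dataset = torchvision.datasets.MNIST("./data", download=True,
>>>     transform=torchvision.transforms.ToTensor())
>>> trainer.mount_dataset(dataset, val_ratio=0.2, batch_size=64)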
- mount_dataloader(train_dataloader, val_dataloader) → None[source]¶
Generally, this method is not recommended.
Only when the mount_dataset method does not meet your needs should you directly mount a train_dataloader and a val_dataloader.
- Parameters
train_dataloader – dataloader used for training.
val_dataloader – dataloader used for validating.
- Returns
None
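A sketch using explicit DataLoaders (train_set and val_set are hypothetical datasets you have already prepared):
>>> from torch.utils.data import DataLoader
>>> train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
>>> val_loader = DataLoader(val_set, batch_size=64)
>>> trainer.mount_dataloader(train_loader, val_loader)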