Deep learning training speed dropped after replacing the RAM sticks

    saberQi · 320 days ago · 1362 views
    This topic was created 320 days ago, so the information in it may have changed since then.

    After I replaced the 16 GB RAM with 32 GB, the training time with mmrotate actually went up. Here is my training log:

    2023-06-08 19:36:31,696 - mmrotate - INFO - Environment info:
    ------------------------------------------------------------
    sys.platform: linux
    Python: 3.7.15 (default, Nov 24 2022, 21:12:53) [GCC 11.2.0]
    CUDA available: True
    GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
    CUDA_HOME: /usr/local/cuda
    NVCC: Cuda compilation tools, release 11.3, V11.3.109
    GCC: gcc (Ubuntu 7.5.0-6ubuntu2) 7.5.0
    PyTorch: 1.10.1
    PyTorch compiling details: PyTorch built with:
      - GCC 7.3
      - C++ Version: 201402
      - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
      - Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
      - OpenMP 201511 (a.k.a. OpenMP 4.5)
      - LAPACK is enabled (usually provided by MKL)
      - NNPACK is enabled
      - CPU capability usage: AVX512
      - CUDA Runtime 11.3
      - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
      - CuDNN 8.2
      - Magma 2.5.2
      - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 
    
    TorchVision: 0.11.2
    OpenCV: 4.6.0
    MMCV: 1.7.0
    MMCV Compiler: GCC 9.3
    MMCV CUDA Compiler: 11.3
    MMRotate: 0.3.4+794a319
    ------------------------------------------------------------
    
    2023-06-08 19:36:31,954 - mmrotate - INFO - Distributed training: False
    2023-06-08 19:36:32,192 - mmrotate - INFO - Config:
    dataset_type = 'HRSCDataset'
    data_root = 'data/hrsc/'
    img_norm_cfg = dict(
        mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
    train_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='LoadAnnotations', with_bbox=True),
        dict(type='RResize', img_scale=(1333, 800)),
        dict(type='RRandomFlip', flip_ratio=0.5),
        dict(
            type='Normalize',
            mean=[123.675, 116.28, 103.53],
            std=[58.395, 57.12, 57.375],
            to_rgb=True),
        dict(type='Pad', size_divisor=32),
        dict(type='DefaultFormatBundle'),
        dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
    ]
    test_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(
            type='MultiScaleFlipAug',
            img_scale=(800, 800),
            flip=False,
            transforms=[
                dict(type='RResize'),
                dict(
                    type='Normalize',
                    mean=[123.675, 116.28, 103.53],
                    std=[58.395, 57.12, 57.375],
                    to_rgb=True),
                dict(type='Pad', size_divisor=32),
                dict(type='DefaultFormatBundle'),
                dict(type='Collect', keys=['img'])
            ])
    ]
    data = dict(
        samples_per_gpu=2,
        workers_per_gpu=2,
        train=dict(
            type='HRSCDataset',
            classwise=False,
            ann_file='data/hrsc/ImageSets/trainval.txt',
            ann_subdir='data/hrsc/FullDataSet/Annotations/',
            img_subdir='data/hrsc/FullDataSet/AllImages/',
            img_prefix='data/hrsc/FullDataSet/AllImages/',
            pipeline=[
                dict(type='LoadImageFromFile'),
                dict(type='LoadAnnotations', with_bbox=True),
                dict(type='RResize', img_scale=(1333, 800)),
                dict(type='RRandomFlip', flip_ratio=0.5),
                dict(
                    type='Normalize',
                    mean=[123.675, 116.28, 103.53],
                    std=[58.395, 57.12, 57.375],
                    to_rgb=True),
                dict(type='Pad', size_divisor=32),
                dict(type='DefaultFormatBundle'),
                dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
            ]),
        val=dict(
            type='HRSCDataset',
            classwise=False,
            ann_file='data/hrsc/ImageSets/test.txt',
            ann_subdir='data/hrsc/FullDataSet/Annotations/',
            img_subdir='data/hrsc/FullDataSet/AllImages/',
            img_prefix='data/hrsc/FullDataSet/AllImages/',
            pipeline=[
                dict(type='LoadImageFromFile'),
                dict(
                    type='MultiScaleFlipAug',
                    img_scale=(800, 800),
                    flip=False,
                    transforms=[
                        dict(type='RResize'),
                        dict(
                            type='Normalize',
                            mean=[123.675, 116.28, 103.53],
                            std=[58.395, 57.12, 57.375],
                            to_rgb=True),
                        dict(type='Pad', size_divisor=32),
                        dict(type='DefaultFormatBundle'),
                        dict(type='Collect', keys=['img'])
                    ])
            ]),
        test=dict(
            type='HRSCDataset',
            classwise=False,
            ann_file='data/hrsc/ImageSets/test.txt',
            ann_subdir='data/hrsc/FullDataSet/Annotations/',
            img_subdir='data/hrsc/FullDataSet/AllImages/',
            img_prefix='data/hrsc/FullDataSet/AllImages/',
            pipeline=[
                dict(type='LoadImageFromFile'),
                dict(
                    type='MultiScaleFlipAug',
                    img_scale=(800, 800),
                    flip=False,
                    transforms=[
                        dict(type='RResize'),
                        dict(
                            type='Normalize',
                            mean=[123.675, 116.28, 103.53],
                            std=[58.395, 57.12, 57.375],
                            to_rgb=True),
                        dict(type='Pad', size_divisor=32),
                        dict(type='DefaultFormatBundle'),
                        dict(type='Collect', keys=['img'])
                    ])
            ]))
    evaluation = dict(interval=1, metric='mAP')
    optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
    optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
    lr_config = dict(
        policy='step',
        warmup='linear',
        warmup_iters=500,
        warmup_ratio=0.3333333333333333,
        step=[24, 33])
    runner = dict(type='EpochBasedRunner', max_epochs=36)
    checkpoint_config = dict(interval=1)
    log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
    dist_params = dict(backend='nccl')
    log_level = 'INFO'
    load_from = None
    resume_from = None
    workflow = [('train', 1)]
    opencv_num_threads = 0
    mp_start_method = 'fork'
    angle_version = 'le90'
    model = dict(
        type='OrientedRCNN',
        backbone=dict(
            type='ResNet',
            depth=50,
            num_stages=4,
            out_indices=(0, 1, 2, 3),
            frozen_stages=-1,
            norm_cfg=dict(type='BN', requires_grad=True),
            norm_eval=True,
            style='pytorch',
            init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
        neck=dict(
            type='FPN',
            in_channels=[256, 512, 1024, 2048],
            out_channels=256,
            num_outs=5),
        rpn_head=dict(
            type='OrientedRPNHead',
            in_channels=256,
            feat_channels=256,
            version='le90',
            anchor_generator=dict(
                type='AnchorGenerator',
                scales=[8],
                ratios=[0.5, 1.0, 2.0],
                strides=[4, 8, 16, 32, 64]),
            bbox_coder=dict(
                type='MidpointOffsetCoder',
                angle_range='le90',
                target_means=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                target_stds=[1.0, 1.0, 1.0, 1.0, 0.5, 0.5]),
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
            loss_bbox=dict(
                type='SmoothL1Loss', beta=0.1111111111111111, loss_weight=1.0)),
        roi_head=dict(
            type='OrientedStandardRoIHead',
            bbox_roi_extractor=dict(
                type='RotatedSingleRoIExtractor',
                roi_layer=dict(
                    type='RiRoIAlignRotated',
                    out_size=7,
                    num_samples=2,
                    num_orientations=8,
                    clockwise=True),
                out_channels=256,
                featmap_strides=[4, 8, 16, 32]),
            bbox_head=dict(
                type='RotatedShared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=1,
                bbox_coder=dict(
                    type='DeltaXYWHAOBBoxCoder',
                    angle_range='le90',
                    norm_factor=None,
                    edge_swap=True,
                    proj_xy=True,
                    target_means=(0.0, 0.0, 0.0, 0.0, 0.0),
                    target_stds=(0.1, 0.1, 0.2, 0.2, 0.1)),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))),
        train_cfg=dict(
            rpn=dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.7,
                    neg_iou_thr=0.3,
                    min_pos_iou=0.3,
                    match_low_quality=True,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=256,
                    pos_fraction=0.5,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=False),
                allowed_border=0,
                pos_weight=-1,
                debug=False),
            rpn_proposal=dict(
                nms_pre=2000,
                max_per_img=2000,
                nms=dict(type='nms', iou_threshold=0.8),
                min_bbox_size=0),
            rcnn=dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.5,
                    neg_iou_thr=0.5,
                    min_pos_iou=0.5,
                    match_low_quality=False,
                    iou_calculator=dict(type='RBboxOverlaps2D'),
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RRandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                pos_weight=-1,
                debug=False)),
        test_cfg=dict(
            rpn=dict(
                nms_pre=2000,
                max_per_img=2000,
                nms=dict(type='nms', iou_threshold=0.8),
                min_bbox_size=0),
            rcnn=dict(
                nms_pre=2000,
                min_bbox_size=0,
                score_thr=0.05,
                nms=dict(iou_thr=0.1),
                max_per_img=2000)))
    work_dir = './work_dirs/oriented_rcnn_r50_fpn_3x_hrsc_le90_no'
    auto_resume = False
    gpu_ids = range(0, 1)
    

    This is the training time before I swapped the memory:

    2023-04-10 19:13:11,617 - mmrotate - INFO - Epoch [1][50/309]   lr: 3.987e-03, eta: 3:40:53, time: 1.197, data_time: 0.050, memory: 4030, loss_rpn_cls: 0.2154, loss_rpn_bbox: 0.0664, loss_cls: 0.0827, acc: 98.8281, loss_bbox: 0.0113, loss: 0.3759, grad_norm: 3.0720
    

    This is the training time after the memory swap:

    2023-06-10 13:47:24,479 - mmrotate - INFO - Epoch [1][50/309]    lr: 3.987e-03, eta: 4 days, 14:54:31, time: 36.055, data_time: 0.055, memory: 5111, loss_rpn_cls: 0.2287, loss_rpn_bbox: 0.1007, loss_cls: 0.1085, acc: 97.1152, loss_bbox: 0.0055, loss: 0.4434, grad_norm: 3.3885
    

    Has anyone run into this before? How can it be fixed?
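
    A quick host-side sanity check that may help here: in both logs data_time stays around 0.05 s while time jumps from about 1.2 s to 36 s per iteration, so the extra time is not being spent in the dataloader. Below is a minimal sketch, assuming numpy is installed (the script name is made up), for comparing raw host memory bandwidth before and after a RAM change:

    # memory_bandwidth_check.py -- rough host memory bandwidth probe (illustrative only).
    # Run it with the old sticks and again with the new ones; a large drop would point
    # at the RAM configuration rather than at the training code.
    import time

    import numpy as np

    def bandwidth_gbs(size_mb: int = 1024, repeats: int = 10) -> float:
        """Time large array copies and return an approximate bandwidth in GB/s."""
        src = np.ones(size_mb * 1024 * 1024 // 8, dtype=np.float64)  # ~size_mb MB buffer
        dst = np.empty_like(src)
        start = time.perf_counter()
        for _ in range(repeats):
            np.copyto(dst, src)
        elapsed = time.perf_counter() - start
        # Each copy reads src and writes dst, so count 2x the buffer size per repeat.
        return 2 * src.nbytes * repeats / elapsed / 1e9

    if __name__ == "__main__":
        print(f"approximate host memory bandwidth: {bandwidth_gbs():.1f} GB/s")

    Dropping from dual-channel to single-channel roughly halves the theoretical bandwidth, so the two readings should differ noticeably if that is what happened.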

    6 replies · 2023-06-14 13:33:40 +08:00
DigitalFarmer · #1 · 320 days ago via Android
Haven't run into this... Maybe ask on the PyTorch GitHub?
lloovve · #2 · 320 days ago via iPhone
If the sticks aren't identical, pay attention to the dual-channel slot arrangement; you can look up the details on Baidu.
laqow · #3 · 319 days ago
If the motherboard has half of the memory soldered on, you may only be able to add a stick that matches it; otherwise the channel count is halved.
NetLauu · #4 · 319 days ago
The sticks probably aren't installed in dual channel, or the memory is running at a lower frequency.
saberQi (OP) · #5 · 319 days ago
My machine is a laptop, a Dell G15, and the memory is Samsung DDR4 3200 ×2, so it should be dual channel...
@NetLauu #4
@lloovve #2
saberQi (OP) · #6 · 319 days ago
@laqow #3 The previous sticks were Samsung too...
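
Following up on the dual-channel and frequency suggestions in #2-#4, here is a minimal sketch for checking what the DIMMs actually negotiated, assuming a Linux host with dmidecode installed and root access (the script name and the exact field labels may need adjusting for your dmidecode version):

    # check_dimms.py -- print populated DIMM slots and their speeds via dmidecode.
    # Run as root, e.g. `sudo python3 check_dimms.py`.
    import subprocess

    # Field labels of interest; older dmidecode versions use "Configured Clock Speed".
    KEYS = ("Locator:", "Size:", "Speed:", "Configured Memory Speed:", "Configured Clock Speed:")

    def main() -> None:
        out = subprocess.run(
            ["dmidecode", "--type", "memory"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            stripped = line.strip()
            if stripped.startswith(KEYS):
                print(stripped)

    if __name__ == "__main__":
        main()

If both sticks show up with the same size and the expected configured speed, the dual-channel and frequency theories can probably be ruled out and the slowdown lies elsewhere.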