# Working with large data files
last updated at 2023-06-28
## About this guide
In this guide we show you how you can actively handle large data files in your ARC.
User: Advanced · Mode: Tutorial
⚠️ This guide presents an interim solution. We are working on a more user-friendly implementation.
## Before we can start
☑️ You have created an ARC before using the ARC Commander
☑️ The latest version of the ARC Commander is installed on your computer
☑️ You have a DataPLANT account
☑️ Your computer is linked to the DataHUB via personal access token
## Large File Storage (LFS)
ARCs and the DataHUB come with a mechanism to sync and store large files called Large File Storage (LFS). LFS is an efficient way to store your large data files; files tracked this way are called "LFS objects". Rather than checking every file during every `arc sync`, the ARC Commander first checks whether anything has changed at all, and only if that is the case does it scan what was changed. This saves time and computing power compared to always scanning all large files for possible changes.
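If you want to check which files of an ARC are currently stored as LFS objects, Git LFS can list them for you; run the following inside your ARC folder:

```bash
git lfs ls-files
```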
By default, the ARC Commander tracks the following files via LFS:
- All files stored in an assay's `dataset` folder, and
- All files with a size larger than 150 MB.
The threshold of 150 MB can easily be adjusted using the ARC Commander. For instance, if you want to increase it to 250 MB (i.e. 250000000 bytes), run

```bash
arc config set -g -n "general.gitlfsbytethreshold" -v "250000000"
```
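To double-check the new value, you can inspect your configuration; this assumes your ARC Commander version offers `arc config list`:

```bash
arc config list
```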
💡 The LFS system is also the reason why Git LFS needs to be installed prior to using the ARC Commander.
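If you are unsure whether Git LFS is available on your machine, you can check from the command line; this prints a version string if the installation works:

```bash
git lfs version
```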
## Track files via LFS
In addition to the defaults, you can also actively choose which files to track via LFS.
- Update your local ARC via `arc sync`.
- Add large files or folders by copying or moving them to your ARC.
- Track the files via the following commands (this records the pattern in `.gitattributes`, as shown below):

  ```bash
  git lfs track "<path/to/FolderWithLargeFiles/**>"
  git add .gitattributes
  ```

- Sync your ARC to the DataHUB via `arc sync`.
- Open your ARC in the DataHUB, navigate to the folder with the LFS objects, and see them flagged as "LFS".
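For reference, each tracked pattern ends up as one line in the `.gitattributes` file at the root of your ARC. After tracking a folder as above it will look similar to this; the path is illustrative:

```
<path/to/FolderWithLargeFiles/**> filter=lfs diff=lfs merge=lfs -text
```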
## Downloading an ARC without large data files
Sometimes you may want to download your ARC to a smaller computer, where you do not need a full copy including all its large data files, for instance because you just want to work with smaller derived data sets or update the ISA metadata.
In this case, you can add the `-n` or `--nolfs` flag to your `arc get` command:

```bash
arc get --nolfs -r https://gitlab.nfdi4plants.de/<YourUser>/<YourARC>
```
For example, have a look at the example ARC https://gitlab.nfdi4plants.de/brilator/Facultative-CAM-in-Talinum. In the DataHUB this ARC has a storage volume of ~11 GB, much of which comes from the large RNASeq data files flagged as "LFS".
You can download this ARC without the LFS objects via

```bash
arc get --nolfs -r https://gitlab.nfdi4plants.de/brilator/Facultative-CAM-in-Talinum
```
⚠️ Even without LFS objects, this ARC still takes up ~1 GB of space.
## Keep LFS objects from syncing
To make sure that LFS objects are also not downloaded during an upcoming `arc sync`, you need to change the LFS option for this ARC on that particular machine. Navigate into your ARC (`Facultative-CAM-in-Talinum`) and execute the following two commands:

```bash
git config --local filter.lfs.smudge "git-lfs smudge --skip -- %f"
git config --local filter.lfs.process "git-lfs filter-process --skip"
```
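With the smudge filter skipped, any LFS object that has not been downloaded shows up in your working copy as a small text pointer file rather than the actual data. Opening such a file reveals content of the following form; hash and size are placeholders:

```
version https://git-lfs.github.com/spec/v1
oid sha256:<64-character-hash>
size <size-in-bytes>
```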
## Selectively download large files
If at some point you wish to selectively download one or more of the LFS objects of your ARC to that machine, you can do so via

```bash
git lfs pull --include "<path/to/fileOrFolder>"
```

For example, the following command will download one of the large RNASeq data files:

```bash
git lfs pull --include "assays/Talinum_RNASeq_minimal/dataset/DB_097_CAMMD_CAGATC_L001_R1_001.fastq.gz"
```
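`git lfs pull --include` also accepts a comma-separated list of paths, so several objects can be fetched in one go; the second path here is a placeholder:

```bash
git lfs pull --include "assays/Talinum_RNASeq_minimal/dataset/DB_097_CAMMD_CAGATC_L001_R1_001.fastq.gz,<path/to/anotherFile>"
```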
## DataPLANT Support
Besides these technical solutions, DataPLANT supports you with community-engaged data stewardship. For further assistance, feel free to reach out via our helpdesk or by contacting us directly.