Conversation
Allow users to upload an entire folder (with subfolders) into a vault, preserving the directory structure. Uses ACTION_OPEN_DOCUMENT_TREE for folder selection and recursively creates vault folders before uploading files. Resolves #63
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
Configuration used: Organization UI Review profile: CHILL, Plan: Pro
📒 Files selected for processing (2)
Walkthrough

Adds end-to-end folder and file upload support across domain, presentation, and data layers. Domain: new StreamHelper, FileUploadedCallback, FolderCreatedCallback, UploadFolderStructure, UploadFolderFiles; UploadFiles and UploadFile updated; tests added. Presentation: folder-picker UI, dialogs (replace/resume/cancel), UploadService foreground service, UploadNotification, lease management (KeepAliveLease, KeepAliveLeaseManager), app-level upload APIs, DI exposure, builders, adapters, resources, and tests. Data: DB schema bumped to 14 with Upgrade13To14, UploadCheckpointEntity, UploadCheckpointDao, and migration tests. Manifest and build dependency updates included.

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~150 minutes

🚥 Pre-merge checks: ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 5
🧹 Nitpick comments (4)
domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderStructure.java (1)
22-28: Mutable internal lists exposed via getters.
`getFiles()` and `getSubfolders()` return the backing `ArrayList` directly, so any caller can mutate the structure without going through `addFile`/`addSubfolder`. If this class is meant to be built incrementally and then consumed as read-only, consider returning `Collections.unmodifiableList(...)` from the getters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderStructure.java` around lines 22 - 28, The getters getFiles() and getSubfolders() expose the mutable backing lists directly; change them to return read-only views by wrapping the internal lists with Collections.unmodifiableList(...) (and add the java.util.Collections import) so callers cannot mutate the internal state; keep addFile/addSubfolder methods for controlled mutation and ensure the returned unmodifiable lists reflect the current contents without allowing external modification.

domain/src/main/java/org/cryptomator/domain/usecases/cloud/StreamHelper.java (1)
16-27: `closeQuietly` on the output stream may silently swallow flush/write errors.
`copy()` closes both streams via `closeQuietly` in the `finally` block. If `out.close()` fails (e.g., a `BufferedOutputStream` that flushes on close, or a remote-backed stream), the data-loss error is silently swallowed and the caller believes the copy succeeded. Consider closing `out` normally (propagating the exception) and only using `closeQuietly` for `in` and for `out` in the error/finally-cleanup path.

♻️ Proposed fix
```diff
 static void copy(InputStream in, OutputStream out) throws IOException {
 	byte[] buffer = new byte[BUFFER_SIZE];
 	try {
 		int read;
 		while ((read = in.read(buffer)) != EOF) {
 			out.write(buffer, 0, read);
 		}
+		out.close();
 	} finally {
 		closeQuietly(in);
 		closeQuietly(out);
 	}
 }
```

Calling `out.close()` inside the `try` is safe because `closeQuietly(out)` in `finally` is a no-op on an already-closed stream. On the happy path any flush/write error in `close()` now propagates; on the error path the stream is still cleaned up quietly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/org/cryptomator/domain/usecases/cloud/StreamHelper.java` around lines 16 - 27, The copy method currently closes both streams via closeQuietly in the finally block which can swallow flush/write exceptions from the OutputStream; modify StreamHelper.copy so that after the write loop completes you explicitly call out.close() inside the try block (so any flush/close exceptions propagate) and in the finally block still call closeQuietly(in) and closeQuietly(out) as a safety cleanup; update references to copy, closeQuietly, BUFFER_SIZE and EOF accordingly so the happy-path close propagates errors but resources are still quietly cleaned up on error.

domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java (1)
85-131: Extract duplicated upload/copy logic into a shared utility class.
`copyDataToFile()` is identical in both classes (lines 119-131 in UploadFolderFiles, lines 98-110 in UploadFiles). Additionally, the `cancelledFlag` pattern, `onCancel()` method, `execute()` exception-wrapping, and most of the `upload()` and `writeCloudFile()` implementations are duplicated across both classes. Extract these into utility methods (following the existing `StreamHelper` pattern) to reduce maintenance burden and inconsistency risk.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java` around lines 85 - 131, There is duplicated logic between UploadFolderFiles and UploadFiles (copyDataToFile, upload/writeCloudFile flows, cancelledFlag/onCancel pattern and execute exception wrapping); extract a shared utility (e.g., UploadUtils) containing: a copyDataToFile(DataSource, Context, cancelledFlag) that reproduces the existing behavior used by both classes, a writeCloudFile helper that wraps the cloudContentRepository.write call (taking CloudFolder/CloudFile, CancelAwareDataSource, size, replacing, ProgressAware), and shared helpers for creating CancelAwareDataSource and unified execute/exception-wrapping logic; update UploadFolderFiles and UploadFiles to call these new UploadUtils methods and remove the duplicated methods (preserve use of CancelAwareDataSource.wrap, dataSource.modifiedDate behavior, and deletion of temp file). Ensure method names referenced here (copyDataToFile, upload, writeCloudFile, CancelAwareDataSource, cancelledFlag, onCancel, execute) are replaced with calls into the shared utility.

presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt (1)
1092-1092: `takePersistableUriPermission` is unnecessary for a one-time manual upload.

The URI permission granted by `ACTION_OPEN_DOCUMENT_TREE` is valid for the lifetime of the current process, which is sufficient for the asynchronous upload that follows. Calling `takePersistableUriPermission` stores an entry in `ContentResolver.getPersistedUriPermissions()` that survives app restarts. Repeated use of folder upload (and every device reboot and reinstall cycle) will accumulate these stale entries until they hit the system cap.

♻️ Remove persistent permission grant for one-time upload

```diff
- context().contentResolver.takePersistableUriPermission(treeUri, Intent.FLAG_GRANT_READ_URI_PERMISSION)
```

If persistent access were needed in the future (e.g., for recurring uploads), it should be explicitly stored alongside its revocation path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt` at line 1092, Remove the unnecessary persistent grant: delete the call to contentResolver.takePersistableUriPermission(treeUri, Intent.FLAG_GRANT_READ_URI_PERMISSION) in BrowseFilesPresenter (the code that handles the ACTION_OPEN_DOCUMENT_TREE result). The one-time permission granted by the document picker is sufficient for the async upload, so simply remove that invocation (leave use of treeUri for the upload intact); if persistent access is ever required in future, obtain and store it explicitly along with a revocation path instead of blindly calling takePersistableUriPermission.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java`:
- Around line 119-131: The copyDataToFile method currently creates a temp file
(created via createTempFile in copyDataToFile) and then opens streams
(CancelAwareDataSource.wrap(...).open(...) and new FileOutputStream(target)) but
if StreamHelper.copy or opening the stream throws, the temp file referenced by
target leaks; update copyDataToFile to use try-with-resources for the
InputStream and OutputStream and ensure the temp file (target) is deleted on any
exception (e.g., in a catch or finally block before rethrowing
FatalBackendException), while still preserving the setLastModified call when
copy succeeds; reference the method copyDataToFile, the target variable,
CancelAwareDataSource.wrap(...).open(...), StreamHelper.copy, createTempFile and
FatalBackendException when locating where to apply this change.
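The shape suggested above can be sketched in plain Java, with `InputStream.transferTo` standing in for `StreamHelper.copy` (class and method names here are illustrative, not the project's actual code): the temp file is created first, both streams are managed by try-with-resources, and the file is deleted before the exception is rethrown.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class TempFileCopy {

	// Copy a source stream into a temp file; if opening a stream or the copy
	// itself fails, the temp file is removed before the exception propagates.
	static File copyToTempFile(InputStream source, File tempDir) throws IOException {
		File target = File.createTempFile("upload", ".tmp", tempDir);
		try (InputStream in = source;
			 OutputStream out = new FileOutputStream(target)) {
			in.transferTo(out); // stand-in for StreamHelper.copy(in, out)
		} catch (IOException | RuntimeException e) {
			// try-with-resources has already closed both streams at this point,
			// so the leaked temp file can be deleted before rethrowing
			if (!target.delete()) {
				target.deleteOnExit();
			}
			throw e;
		}
		return target;
	}
}
```

A `setLastModified` call on `target` (as in the original `copyDataToFile`) would go after the try block, on the success path only.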
In `@presentation/build.gradle`:
- Line 188: The project is using an outdated DocumentFile dependency; update the
version constant androidxDocumentfileVersion in buildsystem/dependencies.gradle
from 1.0.1 to 1.1.0 so the dependency referenced by implementation
dependencies.documentFile (used for DocumentFile.fromTreeUri()) pulls 1.1.0;
after bumping the constant, re-sync the Gradle project to ensure implementation
dependencies.documentFile resolves to the new 1.1.0 artifact.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 1108-1129: The recursive buildUploadFolderStructure(DocumentFile)
can overflow the JVM stack for deeply nested directories; replace it with an
iterative implementation that uses an explicit stack/Deque of work items (e.g.,
pairs of DocumentFile and its corresponding UploadFolderStructure) to traverse
the tree: start by creating the root UploadFolderStructure(documentFile.name ?:
"folder"), push (documentFile, root) on the stack, then loop popping a
(currentDir, currentStruct), call currentDir.listFiles(), for each child push a
new (child, newSubStruct) for directories (and call
currentStruct.addSubfolder(newSubStruct)), and for files create UploadFile via
UploadFile.anUploadFile().withFileName(...).withDataSource(UriBasedDataSource.from(child.uri)).thatIsReplacing(false).build()
and add with currentStruct.addFile(...); this preserves existing symbols
(buildUploadFolderStructure, UploadFolderStructure, UploadFile,
UriBasedDataSource, DocumentFile) and avoids recursion/StackOverflowError.
- Around line 1115-1124: child.name can be null and currently silently skips
adding the file; update the block handling child.isFile (around
UploadFile.anUploadFile(), UriBasedDataSource.from(child.uri), and
structure.addFile) to handle null names explicitly: if child.name is null, emit
a warning via the presenter/logger noting the synthetic/unnamed entry (include
child.uri for context) and either supply a deterministic fallback name (e.g.,
derived from child.uri or a generated "unnamed-<id>" token) before building the
UploadFile, or add a clear code comment if dropping is intentional; also ensure
totalFileCount() accounting matches this behavior.
- Around line 1095-1106: Replace the raw Thread that calls
buildUploadFolderStructure(...) with a lifecycle-aware RxJava Single (or at
minimum wrap the background work in try/catch/finally) so exceptions can't kill
the thread and the progress spinner is always cleared: perform
buildUploadFolderStructure(documentFile) on Schedulers.io(), observe on
AndroidSchedulers.mainThread(), handle errors in onError to call
view?.showMessage(...) and always call
view?.showProgress(ProgressModel.COMPLETED) in the terminal path
(finally/DoFinally or Single.doFinally) before returning; ensure you no longer
call activity().runOnUiThread directly after the background work but use the
Android main scheduler (or check activity() is non-null) so callbacks aren't
executed against a destroyed Activity, and keep the existing
uploadFolder(folderStructure) call in the success path.
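The Deque-based traversal from the `buildUploadFolderStructure` item above can be sketched in plain Java. `Node` and `Structure` below are minimal hypothetical stand-ins for `DocumentFile` and `UploadFolderStructure`, just enough to show how an explicit stack replaces recursion:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class FolderTraversal {

	// Stand-in for DocumentFile: a named tree node.
	static class Node {
		final String name;
		final boolean isDirectory;
		final List<Node> children = new ArrayList<>();
		Node(String name, boolean isDirectory) { this.name = name; this.isDirectory = isDirectory; }
	}

	// Stand-in for UploadFolderStructure.
	static class Structure {
		final String name;
		final List<String> files = new ArrayList<>();
		final List<Structure> subfolders = new ArrayList<>();
		Structure(String name) { this.name = name; }
	}

	// Iterative depth-first traversal: two parallel deques hold the pending
	// (directory, structure) pairs, so deep trees cannot overflow the call stack.
	static Structure build(Node rootDir) {
		Structure root = new Structure(rootDir.name != null ? rootDir.name : "folder");
		Deque<Node> dirs = new ArrayDeque<>();
		Deque<Structure> structs = new ArrayDeque<>();
		dirs.push(rootDir);
		structs.push(root);
		while (!dirs.isEmpty()) {
			Node dir = dirs.pop();
			Structure struct = structs.pop();
			for (Node child : dir.children) {
				if (child.isDirectory) {
					Structure sub = new Structure(child.name);
					struct.subfolders.add(sub);
					dirs.push(child);
					structs.push(sub);
				} else {
					struct.files.add(child.name); // real code would build an UploadFile here
				}
			}
		}
		return root;
	}
}
```

In the presenter, the pair of deques would carry `DocumentFile` and `UploadFolderStructure` instead, and the file branch would call `UploadFile.anUploadFile()...build()` as described in the prompt.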
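For the null-name item above, a deterministic fallback could look like this hypothetical helper; the `unnamed-<id>` token derived from the URI string is an assumption, not the project's actual naming convention:

```java
class FallbackName {

	// Hypothetical helper: return the display name if present, otherwise a
	// deterministic fallback derived from the entry's URI string, so the same
	// unnamed entry always maps to the same generated name.
	static String nameOrFallback(String displayName, String uri) {
		if (displayName != null) {
			return displayName;
		}
		return "unnamed-" + Integer.toHexString(uri.hashCode());
	}
}
```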
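The "at minimum wrap in try/catch/finally" fallback from the last item above can be sketched with a plain `ExecutorService` in place of RxJava schedulers (all names here are illustrative, not the presenter's actual API). The point is the guarantee: whatever the background work does, `doFinally` runs, mirroring how `Single.doFinally` would always clear the progress spinner:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.function.Consumer;

class GuardedBackgroundWork {

	// Run background work so that an exception cannot silently kill the thread
	// and the terminal action (e.g., hiding a progress spinner) always executes.
	static <T> void run(ExecutorService io, Callable<T> work,
						Consumer<T> onSuccess, Consumer<Throwable> onError, Runnable doFinally) {
		io.execute(() -> {
			try {
				T result = work.call();
				onSuccess.accept(result);
			} catch (Throwable t) {
				onError.accept(t); // e.g., view?.showMessage(...)
			} finally {
				doFinally.run(); // e.g., view?.showProgress(ProgressModel.COMPLETED)
			}
		});
	}
}
```

In the real presenter, `onSuccess`/`onError`/`doFinally` would additionally have to be dispatched to the Android main thread (the part the Rx `observeOn(AndroidSchedulers.mainThread())` variant handles for free).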
Resolved review threads (collapsed):
- domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java
- presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt (3 outdated threads)
Actionable comments posted: 14
🧹 Nitpick comments (15)
presentation/src/main/res/values/strings.xml (1)
630-640: Optional: add a unit label to `notification_upload_message` for clarity.

`"Uploading %1$d/%2$d"` leaves the numbers ambiguous in the notification shade (files? MB?). The analogous `dialog_upload_file_remaining` string at line 406 uses `"File %1$d of %2$d"`. A small label like `"Uploading %1$d/%2$d files"` would be more user-friendly, and is consistent with the finished/failed messages in the same block that all reference "files".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/res/values/strings.xml` around lines 630 - 640, Update the notification string named notification_upload_message to include a unit label so the progress is unambiguous (e.g., change the text from "Uploading %1$d/%2$d" to "Uploading %1$d/%2$d files"); locate notification_upload_message in the strings.xml near the other upload strings and mirror the phrasing used by dialog_upload_file_remaining to keep consistency with finished/failed messages.

data/src/main/java/org/cryptomator/data/db/entities/UploadCheckpointEntity.java (1)
33-37: `@NotNull` on primitive fields `int` and `long` is redundant.

GreenDAO's `@NotNull` annotation on `int`/`long` primitives is a no-op at the Java level (primitives can never be null). The generated SQLite column is NOT NULL for primitives regardless. Removing the annotations reduces noise.

♻️ Proposed cleanup

```diff
-	@NotNull
 	private int totalFileCount;
-	@NotNull
 	private long timestamp;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@data/src/main/java/org/cryptomator/data/db/entities/UploadCheckpointEntity.java` around lines 33 - 37, Remove the redundant `@NotNull` annotations from the primitive fields in UploadCheckpointEntity: delete `@NotNull` on the fields totalFileCount (int) and timestamp (long) since primitives cannot be null and GreenDAO already generates NOT NULL columns for them; leave `@NotNull` on non-primitive fields if present and adjust any import usage if that annotation becomes unused.

presentation/src/main/java/org/cryptomator/presentation/di/component/ApplicationComponent.java (1)
7-7: `UploadCheckpointDao` (data layer) exposed directly through the presentation DI component; consider a domain-level abstraction.

All other providers in `ApplicationComponent` (`vaultRepository()`, `cloudContentRepository()`, etc.) return domain-layer interfaces. Adding a concrete data-layer DAO here creates a direct presentation→data coupling that bypasses the domain boundary and makes the presentation layer harder to test in isolation. Consider introducing an `UploadCheckpointRepository` interface in the `domain` module, implementing it in the `data` module, and providing that interface through the component instead.

Also applies to: 51-52
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/di/component/ApplicationComponent.java` at line 7, ApplicationComponent currently exposes the data-layer UploadCheckpointDao directly (symbol: UploadCheckpointDao) which leaks the data layer into presentation; add a domain-level abstraction UploadCheckpointRepository in the domain module, implement it in the data module by adapting UploadCheckpointDao, and change ApplicationComponent to provide UploadCheckpointRepository instead of UploadCheckpointDao (update provider method signatures and any injection sites to use UploadCheckpointRepository) so the presentation layer depends only on the domain interface.

presentation/src/main/java/org/cryptomator/presentation/presenter/UploadFolderStructureBuilder.kt (1)
11-13: Unbounded recursion — consider adding an explicit depth limit or iterative traversal.

`buildUploadFolderStructure` calls itself for every subdirectory without any depth guard. Deeply nested folder hierarchies (e.g., auto-generated project structures) can cause a `StackOverflowError`. An explicit max-depth constant and an early return (or a log + skip) would add a cheap safety net.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/UploadFolderStructureBuilder.kt` around lines 11 - 13, The recursive call in buildUploadFolderStructure has no depth guard and can overflow the stack; add a max-depth safety check (e.g., MAX_UPLOAD_DEPTH constant) and thread a currentDepth parameter (or convert to an iterative traversal) so that buildUploadFolderStructure(child) is only called when currentDepth < MAX_UPLOAD_DEPTH; when the limit is reached, skip adding deeper subfolders (or log a warning) instead of recursing, ensuring addSubfolder(...) only receives built structures within the allowed depth.

domain/src/main/java/org/cryptomator/domain/usecases/cloud/FileUploadedCallback.java (1)
5-5: Consider clarifying the `relativePath` parameter contract.

Per the AI summary, `UploadFiles` invokes this callback with just a plain `fileName`, not a relative path. The parameter name `relativePath` implies a path with separators, which will be incorrect for single-file upload callers. A neutral name (e.g., `fileIdentifier`) or a Javadoc explaining the convention (plain name for `UploadFiles`, relative path for `UploadFolderFiles`) would prevent misuse.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/org/cryptomator/domain/usecases/cloud/FileUploadedCallback.java` at line 5, The parameter name on FileUploadedCallback.onFileUploaded should be clarified: either rename the parameter from relativePath to a neutral identifier (e.g., fileIdentifier) across the FileUploadedCallback interface and all callers (including UploadFiles and UploadFolderFiles), or add Javadoc to onFileUploaded that states the exact contract (e.g., "plain file name when called from UploadFiles, relative path when called from UploadFolderFiles"); update the interface signature or its Javadoc and adjust/verify all implementations (and callers) such as UploadFiles and UploadFolderFiles to match the new name/contract.

presentation/src/main/java/org/cryptomator/presentation/ui/dialog/ResumeUploadDialog.kt (2)
47-49: Unsafe cast `context as Callback` will produce an unhelpful `ClassCastException`.

If a caller passes a `Context` that doesn't implement `Callback`, this crashes without a clear message. Consider adding a guard with a descriptive error.

Proposed fix

```diff
 init {
-	callback = context as Callback
+	require(context is Callback) {
+		"${context.javaClass.simpleName} must implement ResumeUploadDialog.Callback"
+	}
+	callback = context
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/ui/dialog/ResumeUploadDialog.kt` around lines 47 - 49, The init block in ResumeUploadDialog currently does an unsafe cast "callback = context as Callback" which can throw a ClassCastException with no clear message; change this to validate the context first (e.g., check "context is Callback") and either assign via a safe cast or throw an IllegalArgumentException with a descriptive message like "ResumeUploadDialog requires a Context implementing ResumeUploadDialog.Callback" so callers get a clear error; update the init block that sets the callback in the ResumeUploadDialog class accordingly.
22-27: Prefer `context.getString(resId, ...)` over `String.format(context.getString(resId), ...)`.

`Context.getString(int, Object...)` handles format arguments directly and is the idiomatic Android approach.

Proposed fix

```diff
-	.setMessage(
-		String.format(
-			context.getString(R.string.dialog_resume_upload_message),
-			completedCount,
-			totalCount
-		)
-	)
+	.setMessage(
+		context.getString(R.string.dialog_resume_upload_message, completedCount, totalCount)
+	)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/ui/dialog/ResumeUploadDialog.kt` around lines 22 - 27, Replace the manual String.format call in ResumeUploadDialog (where you call setMessage with String.format(context.getString(R.string.dialog_resume_upload_message), completedCount, totalCount)) with the idiomatic Context.getString overload: call context.getString(R.string.dialog_resume_upload_message, completedCount, totalCount) and pass that to setMessage; this removes the unnecessary String.format wrapper and uses the platform formatting API in the ResumeUploadDialog.kt code path.

data/src/main/java/org/cryptomator/data/db/UploadCheckpointDao.java (1)
28-37: `insertOrReplace` uses a read-then-write pattern instead of SQLite's native upsert.

The `findByVaultId` → conditional `update`/`insertOrThrow` pattern is a TOCTOU concern. While the unique index prevents duplicate rows, a concurrent call could cause an uncaught `SQLiteConstraintException` from `insertOrThrow`. Since the table already has a unique index on `VAULT_ID`, you can use `insertWithOnConflict` with `CONFLICT_REPLACE` to atomically upsert in a single statement.

Proposed simplification

```diff
 public long insertOrReplace(UploadCheckpointEntity entity) {
 	ContentValues values = toContentValues(entity);
-	UploadCheckpointEntity existing = findByVaultId(entity.getVaultId());
-	if (existing != null) {
-		getDb().update(TABLE_NAME, values, "VAULT_ID = ?", new String[]{existing.getVaultId().toString()});
-		return existing.getId();
-	} else {
-		return getDb().insertOrThrow(TABLE_NAME, null, values);
-	}
+	return getDb().insertWithOnConflict(TABLE_NAME, null, values, SQLiteDatabase.CONFLICT_REPLACE);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@data/src/main/java/org/cryptomator/data/db/UploadCheckpointDao.java` around lines 28 - 37, Replace the current read-then-write implementation in insertOrReplace(UploadCheckpointEntity) (which calls findByVaultId and conditionally update/insertOrThrow) with a single atomic upsert using getDb().insertWithOnConflict(TABLE_NAME, null, toContentValues(entity), SQLiteDatabase.CONFLICT_REPLACE) so concurrent inserts won't throw a SQLiteConstraintException; ensure you remove the conditional branch and return the long row id returned by insertWithOnConflict.

presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt (1)
1100-1117: `onFileUploadCompleted()` is called synchronously after dispatching an async upload.

Both `uploadFolder` (line 1115) and `uploadFiles` (line 434) call `onFileUploadCompleted()` immediately after `cryptomatorApp.startFolderUpload`/`startFileUpload`. This clears `uploadLocation` and shows `ProgressModel.COMPLETED` before the background service has done any work. If the design intent is "fire and forget to the service," this is fine — but the method name `onFileUploadCompleted` is misleading since the upload hasn't completed yet. Consider renaming to clarify intent (e.g., `onUploadDispatched`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt` around lines 1100 - 1117, The call to onFileUploadCompleted() is misleading because uploadFolder and uploadFiles call it immediately after dispatching async work via cryptomatorApp.startFolderUpload and cryptomatorApp.startFileUpload; either rename onFileUploadCompleted to reflect dispatch semantics (e.g., onUploadDispatched) and update all usages, or remove the immediate call and instead invoke the real completion handler when the background service signals completion (hook into the service's callback/event where startFolderUpload/startFileUpload reports results); update references in BrowseFilesPresenter (uploadFolder, uploadFiles) and any observers of onFileUploadCompleted accordingly.

presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt (2)
566-590: `onResumeUploadConfirmed` performs potentially blocking DB and repository calls on the UI thread.

`dao.findByVaultId`, `applicationComponent().vaultRepository().load()`, and `applicationComponent().cloudRepository().decryptedViewOf()` are all synchronous calls executed on the calling (main) thread. If the vault repository or cloud repository accesses involve I/O (network, crypto, or database), this will block the UI. Consider wrapping this logic in a background use-case or coroutine, consistent with how other vault-loading flows in this presenter use `DefaultResultHandler` callbacks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt` around lines 566 - 590, onResumeUploadConfirmed is doing blocking work on the UI thread (dao.findByVaultId, applicationComponent().vaultRepository().load, applicationComponent().cloudRepository().decryptedViewOf); move those synchronous calls into a background coroutine or existing use-case pattern (e.g., DefaultResultHandler) so I/O happens off the main thread, then dispatch back to the main thread to call handleFolderResume/handleFileResume and navigatePendingAfterResume; ensure errors are caught and handled on the background task (preserving the Timber.error + dao.deleteByVaultId behavior) and that any UI interactions occur on the main dispatcher.
540-561: `checkForInterruptedUpload` performs a synchronous DAO query on the main thread.

`dao.findByVaultId(vault.id)` at line 552 is called from the `onSuccess` callback of a use-case result handler, which runs on the main thread. Database access here may cause UI jank or ANR on slower devices.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt` around lines 540 - 561, checkForInterruptedUpload currently calls dao.findByVaultId(vault.id) on the main thread (called from the onSuccess callback), which can block UI; change it to perform the DB lookup off the main thread (e.g. use an Executor/IO coroutine Dispatcher) and then resume on the main thread to update UI: run findByVaultId and parseJsonArraySize on a background thread, then on the main thread call ResumeUploadDialog.withContext(...).show(...) and set pendingNavigationAfterResume or call navigateToVaultContent(vault, folder) as appropriate; update checkForInterruptedUpload to accept/handle the async result and ensure any UI interactions (ResumeUploadDialog.withContext, pendingNavigationAfterResume, navigateToVaultContent) happen on the main thread.

domain/src/test/java/org/cryptomator/domain/usecases/cloud/UploadFolderFilesTest.kt (1)
425-458: Test utility methods `dataSourceWithBytes` and `bytes` are duplicated from `UploadFileTest.kt`.

These helper methods are identical to those in `UploadFileTest.kt` (per the relevant code snippet at lines 95-121). Consider extracting them into a shared test utility to reduce duplication across upload test classes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/test/java/org/cryptomator/domain/usecases/cloud/UploadFolderFilesTest.kt` around lines 425 - 458, The methods dataSourceWithBytes(...) and bytes(...) duplicated from UploadFileTest.kt should be extracted into a shared test utility to remove duplication: create a new test helper class (e.g., TestDataSources or UploadTestUtils) containing these two functions (preserving their signatures and behavior), replace the local implementations in UploadFolderFilesTest (and UploadFileTest) with calls/imports to that utility, and update tests to import/use the shared functions instead of defining them locally.

presentation/src/main/java/org/cryptomator/presentation/service/UploadNotification.kt (1)
132-138: Notification channel name is hardcoded in English.

`NOTIFICATION_CHANNEL_NAME` on line 136 is a hardcoded English string. It should reference a string resource for proper localization, consistent with how other strings in this class use `context.getString(...)`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadNotification.kt` around lines 132 - 138, The NOTIFICATION_CHANNEL_NAME constant in the companion object is hardcoded in English; change the code to obtain the channel name from a string resource instead of a compile-time constant: add a string resource (e.g. R.string.notification_channel_cryptomator_upload) in strings.xml, remove or replace NOTIFICATION_CHANNEL_NAME so the UploadNotification code that creates the NotificationChannel uses context.getString(R.string.notification_channel_cryptomator_upload) (similar to other uses of context.getString in this class), and update any references to NOTIFICATION_CHANNEL_NAME to use the context-derived string.

domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java (1)
29-164: Extract duplicate upload mechanics into a shared base class or utility.
Substantial code duplication exists between `UploadFiles.java` and `UploadFolderFiles.java`: the cancel flag, the `execute()` method with cancellation wrapping, `copyDataToFile()`, and the `upload()` overloads are either identical or differ only in parameter threading. The `writeCloudFile()` method also follows the same pattern, with the only difference being whether a `CloudFolder` parameter is passed or the `parent` field is used. Consider extracting these shared mechanics into a common base class or utility helper, keeping only the folder-creation and recursive traversal logic unique to `UploadFolderFiles`.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java` around lines 29 - 164, There is duplicated upload/cancellation logic between UploadFolderFiles and UploadFiles; extract the shared mechanics (fields: cancelled, cancelledFlag/Flag, fileUploadedCallback if shared; methods: execute(), copyDataToFile(), both upload(...) overloads, writeCloudFile(...), and usage of CancelAwareDataSource.wrap and FileBasedDataSource.from) into a common base class or utility (e.g., AbstractUpload or UploadHelper) that exposes a protected method to perform the low-level write (writeCloudFile) and a protected API to supply folder-specific behavior (e.g., createFolderSafe/recursive traversal in UploadFolderFiles); update UploadFolderFiles to extend/use that base and remove duplicated methods so it only implements folder-creation and traversal (createFolderSafe, uploadFolder) while delegating to the common implementation for actual file copying/writing and cancellation handling.
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java (1)
213-239: Remove the unused `contentResolverUtil` field or expose it through a binder getter.
The field at line 42 is initialized in `Binder.init()` (line 221) but never referenced elsewhere in `UploadService`. Unlike `uploadCheckpointDao`, which has a getter method, `contentResolverUtil` is not exposed to callers. Either remove it if it's not needed, or add a getter method if it's intended for use by clients.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 213 - 239, The UploadService class contains an initialized but unused field contentResolverUtil (set in Binder.init()), so either remove the UploadService.contentResolverUtil field and its initialization in Binder.init() if it’s not needed, or add a public accessor in UploadService (e.g., getContentResolverUtil()) and expose it via Binder by adding a corresponding getContentResolverUtil() method that returns UploadService.this.contentResolverUtil; update Binder.init() only if keeping the field to ensure it remains initialized.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@data/src/androidTest/java/org/cryptomator/data/db/UpgradeDatabaseTest.kt`:
- Around line 994-1007: The catch currently swallows any Exception after the
duplicate insert, so replace the broad catch with an assertion that the thrown
exception is the expected constraint type: catch specifically for
android.database.sqlite.SQLiteConstraintException (or at minimum
android.database.SQLException) or catch Exception and
assertThat(e).isInstanceOf(SQLiteConstraintException::class.java), referencing
the test block that performs Sql.insertInto("UPLOAD_CHECKPOINT_ENTITY") and the
surrounding try/catch in UpgradeDatabaseTest to ensure only unique-constraint
failures are treated as expected.
In
`@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java`:
- Around line 107-114: createFolderSafe currently returns a locally constructed
CloudFolder reference on CloudNodeAlreadyExistsException without validating it
against the server; update the catch block in createFolderSafe to resolve or
verify the folder server-side (e.g., call
cloudContentRepository.resolve(cloudContentRepository.folder(parent,
folderName)) or use an exists() check) and return the resolved/validated
CloudFolder so both success and exception paths perform the same server-side
validation.
In `@presentation/src/main/AndroidManifest.xml`:
- Around line 215-218: Add an explicit android:exported="false" attribute to the
service declarations for .service.UploadService (and the similar
.service.AutoUploadService) in the manifest so the services are explicitly
non-exported; locate the <service android:name=".service.UploadService" ... />
and the <service android:name=".service.AutoUploadService" ... /> entries and
add android:exported="false" alongside the existing attributes.
In `@presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt`:
- Around line 218-220: The current toJsonArray function correctly escapes
strings, but VaultListPresenter.parseJsonArray uses a fragile split(",") which
breaks on values containing commas; replace the manual splitting in
VaultListPresenter.parseJsonArray with a robust JSON parser (e.g., construct an
org.json.JSONArray from the stored string and iterate to extract each element)
so both sides use proper JSON array serialization/deserialization and handle
escaped quotes/commas consistently with toJsonArray.
- Around line 187-195: startFileUpload and startFolderUpload currently no-op
when uploadServiceBinder is null, silently dropping user requests; update both
methods (startFileUpload, startFolderUpload) to check uploadServiceBinder and
when null log a warning (use the class/process logger) and surface user-visible
feedback (e.g., show a Toast or a UI notification / dialog or invoke an error
callback) so the user knows the upload wasn't started, while keeping the
existing call path when uploadServiceBinder is present.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/UploadFolderStructureBuilder.kt`:
- Around line 14-23: When building file entries in UploadFolderStructureBuilder
(inside the child.isFile branch where child.name is checked), don't silently
drop files with null names — log a warning including the document URI and skip
adding the file. Modify the child.name?.let { ... } path to add an else branch
that calls the presenter’s logger (e.g., logger.warn or similar) with a message
like "Skipping file with null name" and the child.uri, then return/skip; leave
the existing structure.addFile/UploadFile.anUploadFile/UriBasedDataSource.from
logic unchanged for the non-null case.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 601-613: The catch block for SecurityException in
VaultListPresenter around the DocumentFile.fromTreeUri(...) /
buildUploadFolderStructure / cryptomatorApp.startFolderUpload flow currently
swallows the exception; update that catch to log the caught SecurityException
(including the exception object) using the presenter’s existing logging facility
(or android.util.Log if none) before falling through to view?.showMessage(...)
and cryptomatorApp.getUploadCheckpointDao()?.deleteByVaultId(vaultId); ensure
the log entry includes context (e.g., "resume upload: SecurityException while
accessing tree uri") and the exception so diagnostics preserve stacktrace.
- Around line 655-660: The current parseJsonArray function hand-parses JSON and
breaks on values with commas or quotes; replace its implementation to use
org.json.JSONArray for robust parsing: in parseJsonArray(json: String?) (used
for pendingFileUris), return an empty list for null/empty inputs, construct a
JSONArray(json), iterate its elements and collect each element.toString() into a
List<String>, and handle JSONException by returning emptyList() or propagating
as appropriate; this ensures URIs containing commas/quotes are parsed correctly.
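The comma-splitting failure called out above is easy to demonstrate in isolation. This is a hedged sketch, not the app's code: `fragileParse` mimics the flagged `split(",")` approach, and `quoteAwareParse` stands in for the recommended `org.json.JSONArray` (still the better choice on Android, since the hand-rolled scan below ignores escaped quotes):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: why splitting a JSON array string on "," corrupts
// entries that themselves contain commas (e.g., document-tree URIs).
public class JsonArraySplitDemo {

    // Fragile approach similar to the flagged one: strip brackets,
    // split on commas, trim surrounding quotes.
    static List<String> fragileParse(String json) {
        List<String> out = new ArrayList<>();
        String body = json.substring(1, json.length() - 1);
        for (String part : body.split(",")) {
            out.add(part.trim().replaceAll("^\"|\"$", ""));
        }
        return out;
    }

    // Quote-aware scan, standing in for org.json.JSONArray: commas
    // inside string literals no longer act as separators.
    static List<String> quoteAwareParse(String json) {
        List<String> out = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inString = false;
        for (int i = 1; i < json.length() - 1; i++) { // skip [ and ]
            char c = json.charAt(i);
            if (c == '"') {
                inString = !inString;
            } else if (c == ',' && !inString) {
                out.add(current.toString());
                current.setLength(0);
            } else if (inString) {
                current.append(c);
            }
        }
        if (current.length() > 0) {
            out.add(current.toString());
        }
        return out;
    }

    public static void main(String[] args) {
        String stored = "[\"content://tree/doc,with,commas\",\"content://tree/plain\"]";
        System.out.println(fragileParse(stored).size());    // 4 entries: corrupted
        System.out.println(quoteAwareParse(stored).size()); // 2 entries: correct
    }
}
```

In the real fix, both serialization and parsing should go through `org.json`, so escaping stays consistent on both sides.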
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadNotification.kt`:
- Around line 64-81: The notification shows an off-by-one when the last file
finishes because update() always displays completedFiles + 1 and
updateFinishedFile() increments completedFiles before calling update(); fix by
clamping the displayed index inside update() (e.g., compute displayIndex =
min(completedFiles + 1, totalFiles)) and use that displayIndex in the format
call so the builder in update() never shows a number greater than totalFiles;
keep updateFinishedFile() as-is (it still increments completedFiles) so the
clamped logic in update() prevents "6 of 5" glitches.
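The clamping fix described above is a one-liner; the sketch below uses a hypothetical helper name (`displayIndex`), not an actual `UploadNotification` member:

```java
// Sketch of the clamp suggested above: show "completedFiles + 1 of
// totalFiles", but never let the displayed index exceed totalFiles, so the
// final update reads "5 of 5" instead of "6 of 5".
public class ProgressClamp {

    static int displayIndex(int completedFiles, int totalFiles) {
        return Math.min(completedFiles + 1, totalFiles);
    }

    public static void main(String[] args) {
        System.out.println(displayIndex(0, 5)); // first file: 1
        System.out.println(displayIndex(4, 5)); // last file in progress: 5
        System.out.println(displayIndex(5, 5)); // after final increment: still 5
    }
}
```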
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 149-159: The throttling fields startTimeNotificationDelay and
elapsedTimeNotificationDelay used in updateNotification are accessed from both
the worker thread and the Handler.post lambda on the main thread without
synchronization; make these fields volatile (or move the entire throttling logic
into the main-thread Runnable) so reads/writes are always visible across threads
and eliminate the race condition in updateNotification.
- Around line 57-66: There is a race where cancelCallback is assigned inside the
worker lambda (in startFileUpload/startFolderUpload via
UploadFiles/UploadFolderFiles) after worker.start(), so a cancel that arrives
earlier never propagates; fix by assigning cancelCallback before calling execute
and immediately invoking it if the service-level cancelled flag is already true:
in startFileUpload and the analogous startFolderUpload, set cancelCallback =
uploadFiles::onCancel (or uploadFolderFiles::onCancel) prior to calling
uploadFiles.execute(progressAware) (and after constructing the upload object),
then if (cancelled) call cancelCallback.run()/onCancel() to propagate any prior
cancel; also ensure onStartCommand still only toggles the cancelled flag and
calls cancelCallback if non-null.
- Around line 79-117: startUpload currently overwrites shared mutable fields
(worker, notification, cancelCallback, cancelled) unconditionally so concurrent
calls corrupt state; add a guard at the start of startUpload that checks the
existing worker (e.g., if worker != null && worker.isAlive()) and either rejects
the new upload or enqueues it, and protect mutations of worker, notification,
cancelCallback and cancelled with a short synchronized block (or use an
AtomicReference for worker) so only one thread can start and set those fields;
also ensure the finally block in the worker clears/resets these shared fields
(worker = null, notification = null, cancelCallback = null, cancelled = false)
so subsequent uploads can proceed.
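The callback-registration ordering for the cancel race can be sketched independently of Android. Names here (`requestCancel`, `startUpload`) are illustrative, not the real `UploadService` API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of the ordering fix: register the cancel callback before
// starting the work, then replay a cancel that arrived early so it is
// never lost.
public class CancelRaceSketch {

    private final AtomicBoolean cancelled = new AtomicBoolean(false);
    private volatile Runnable cancelCallback;

    // Analogous to onStartCommand receiving a cancel intent.
    void requestCancel() {
        cancelled.set(true);
        Runnable cb = cancelCallback;
        if (cb != null) {
            cb.run();
        }
    }

    // Analogous to startFileUpload/startFolderUpload: register before execute.
    void startUpload(Runnable onCancel, Runnable execute) {
        cancelCallback = onCancel;  // register first...
        if (cancelled.get()) {
            onCancel.run();         // ...then replay an early cancel
        }
        execute.run();
    }

    public static void main(String[] args) {
        CancelRaceSketch service = new CancelRaceSketch();
        AtomicBoolean useCaseCancelled = new AtomicBoolean(false);
        service.requestCancel(); // cancel arrives before the upload starts
        service.startUpload(() -> useCaseCancelled.set(true), () -> { });
        System.out.println(useCaseCancelled.get()); // true: cancel was not lost
    }
}
```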
In `@presentation/src/main/res/values/strings.xml`:
- Line 635: The string resource notification_upload_finished_message uses a
singular/plural-sensitive count but is defined as a plain <string>, producing "1
files uploaded to vault"; replace it with a <plurals
name="notification_upload_finished_message"> providing quantity entries for
"one" (e.g., "1 file uploaded to vault") and "other" (e.g., "%d files uploaded
to vault"), and update any callers to fetch it via
getResources().getQuantityString(R.plurals.notification_upload_finished_message,
count, count) or the Kotlin equivalent so the correct form is chosen at runtime.
- Line 644: The string resource dialog_resume_upload_message uses a
plural-dependent phrase ("%1$d of %2$d files completed") that renders
incorrectly for singular counts; update this resource by either rewording to a
neutral form (e.g., "%1$d of %2$d completed" or "Upload interrupted (%1$d of
%2$d items completed). Resume?") or replace the inner "files completed" fragment
with a <plurals> resource and reference it so singular/plural forms are handled
correctly; locate the string by its resource name dialog_resume_upload_message
and implement the chosen change in strings.xml and, if using plurals, add the
corresponding <plurals> entry and use getQuantityString where the resource is
consumed.
---
Duplicate comments:
In
`@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java`:
- Around line 150-162: The copyDataToFile method leaks the temporary File
created by createTempFile("upload","tmp",dir) when opening streams or
StreamHelper.copy(...) throws; modify copyDataToFile to use try-with-resources
for the InputStream (from CancelAwareDataSource.wrap(...).open(context)) and
OutputStream (new FileOutputStream(target)) and ensure that if any exception
occurs after creating target you delete target (target.delete() or
Files.deleteIfExists) before rethrowing the FatalBackendException; keep the
modifiedDate handling (dataSource.modifiedDate(...).ifPresent(...)) after a
successful copy so the temp file is only touched when the copy completed.
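The delete-on-failure pattern recommended for `copyDataToFile` can be sketched with plain `java.io`/`java.nio`; `copyToTempFile` and `failMidCopy` are hypothetical stand-ins, and the checkpoint and `modifiedDate` handling are omitted:

```java
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;

// Sketch of the cleanup pattern: try-with-resources for the stream, and
// deletion of the temp file on any failure so partial writes don't leak.
public class TempFileCleanupSketch {

    static File copyToTempFile(byte[] data, File dir, boolean failMidCopy) throws IOException {
        File target = File.createTempFile("upload", "tmp", dir);
        try (OutputStream out = Files.newOutputStream(target.toPath())) {
            out.write(data);
            if (failMidCopy) {
                throw new IOException("simulated copy failure");
            }
        } catch (IOException e) {
            // Don't leak the partially written temp file.
            Files.deleteIfExists(target.toPath());
            throw e;
        }
        return target;
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("demo").toFile();
        try {
            copyToTempFile(new byte[] {1, 2, 3}, dir, true);
        } catch (IOException expected) {
            // expected for the demo
        }
        System.out.println(dir.listFiles().length); // 0: temp file was cleaned up
    }
}
```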
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 1087-1098: The background Thread that calls
buildUploadFolderStructure can throw and never reach
view?.showProgress(ProgressModel.COMPLETED) or call uploadFolder, and
activity().runOnUiThread is not lifecycle-safe; wrap the background work in
try/catch/finally, catch relevant exceptions (e.g.,
SecurityException/Exception), and in finally always post back to the UI to call
view?.showProgress(ProgressModel.COMPLETED) (and show an error via
view?.showMessage when appropriate), and use a lifecycle-safe UI post (e.g.,
check activity() is non-null and use activity()?.runOnUiThread or a safe
dispatcher) before calling uploadFolder(folderStructure, treeUri) so the UI
update and upload invocation always happen safely even on errors or destroyed
activity.
---
Nitpick comments:
In
`@data/src/main/java/org/cryptomator/data/db/entities/UploadCheckpointEntity.java`:
- Around line 33-37: Remove the redundant `@NotNull` annotations from the
primitive fields in UploadCheckpointEntity: delete `@NotNull` on the fields
totalFileCount (int) and timestamp (long) since primitives cannot be null and
GreenDAO already generates NOT NULL columns for them; leave `@NotNull` on
non-primitive fields if present and adjust any import usage if that annotation
becomes unused.
In `@data/src/main/java/org/cryptomator/data/db/UploadCheckpointDao.java`:
- Around line 28-37: Replace the current read-then-write implementation in
insertOrReplace(UploadCheckpointEntity) (which calls findByVaultId and
conditionally update/insertOrThrow) with a single atomic upsert using
getDb().insertWithOnConflict(TABLE_NAME, null, toContentValues(entity),
SQLiteDatabase.CONFLICT_REPLACE) so concurrent inserts won’t throw a
SQLiteConstraintException; ensure you remove the conditional branch and return
the long row id returned by insertWithOnConflict.
In
`@domain/src/main/java/org/cryptomator/domain/usecases/cloud/FileUploadedCallback.java`:
- Line 5: The parameter name on FileUploadedCallback.onFileUploaded should be
clarified: either rename the parameter from relativePath to a neutral identifier
(e.g., fileIdentifier) across the FileUploadedCallback interface and all callers
(including UploadFiles and UploadFolderFiles), or add Javadoc to onFileUploaded
that states the exact contract (e.g., "plain file name when called from
UploadFiles, relative path when called from UploadFolderFiles"); update the
interface signature or its Javadoc and adjust/verify all implementations (and
callers) such as UploadFiles and UploadFolderFiles to match the new
name/contract.
In
`@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java`:
- Around line 29-164: There is duplicated upload/cancellation logic between
UploadFolderFiles and UploadFiles; extract the shared mechanics (fields:
cancelled, cancelledFlag/Flag, fileUploadedCallback if shared; methods:
execute(), copyDataToFile(), both upload(...) overloads, writeCloudFile(...),
and usage of CancelAwareDataSource.wrap and FileBasedDataSource.from) into a
common base class or utility (e.g., AbstractUpload or UploadHelper) that exposes
a protected method to perform the low-level write (writeCloudFile) and a
protected API to supply folder-specific behavior (e.g.,
createFolderSafe/recursive traversal in UploadFolderFiles); update
UploadFolderFiles to extend/use that base and remove duplicated methods so it
only implements folder-creation and traversal (createFolderSafe, uploadFolder)
while delegating to the common implementation for actual file copying/writing
and cancellation handling.
In
`@domain/src/test/java/org/cryptomator/domain/usecases/cloud/UploadFolderFilesTest.kt`:
- Around line 425-458: The methods dataSourceWithBytes(...) and bytes(...)
duplicated from UploadFileTest.kt should be extracted into a shared test utility
to remove duplication: create a new test helper class (e.g., TestDataSources or
UploadTestUtils) containing these two functions (preserving their signatures and
behavior), replace the local implementations in UploadFolderFilesTest (and
UploadFileTest) with calls/imports to that utility, and update tests to
import/use the shared functions instead of defining them locally.
In
`@presentation/src/main/java/org/cryptomator/presentation/di/component/ApplicationComponent.java`:
- Line 7: ApplicationComponent currently exposes the data-layer
UploadCheckpointDao directly (symbol: UploadCheckpointDao) which leaks the data
layer into presentation; add a domain-level abstraction
UploadCheckpointRepository in the domain module, implement it in the data module
by adapting UploadCheckpointDao, and change ApplicationComponent to provide
UploadCheckpointRepository instead of UploadCheckpointDao (update provider
method signatures and any injection sites to use UploadCheckpointRepository) so
the presentation layer depends only on the domain interface.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 1100-1117: The call to onFileUploadCompleted() is misleading
because uploadFolder and uploadFiles call it immediately after dispatching async
work via cryptomatorApp.startFolderUpload and cryptomatorApp.startFileUpload;
either rename onFileUploadCompleted to reflect dispatch semantics (e.g.,
onUploadDispatched) and update all usages, or remove the immediate call and
instead invoke the real completion handler when the background service signals
completion (hook into the service's callback/event where
startFolderUpload/startFileUpload reports results); update references in
BrowseFilesPresenter (uploadFolder, uploadFiles) and any observers of
onFileUploadCompleted accordingly.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/UploadFolderStructureBuilder.kt`:
- Around line 11-13: The recursive call in buildUploadFolderStructure has no
depth guard and can overflow the stack; add a max-depth safety check (e.g.,
MAX_UPLOAD_DEPTH constant) and thread a currentDepth parameter (or convert to an
iterative traversal) so that buildUploadFolderStructure(child) is only called
when currentDepth < MAX_UPLOAD_DEPTH; when the limit is reached, skip adding
deeper subfolders (or log a warning) instead of recursing, ensuring
addSubfolder(...) only receives built structures within the allowed depth.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 566-590: onResumeUploadConfirmed is doing blocking work on the UI
thread (dao.findByVaultId, applicationComponent().vaultRepository().load,
applicationComponent().cloudRepository().decryptedViewOf); move those
synchronous calls into a background coroutine or existing use-case pattern
(e.g., DefaultResultHandler) so I/O happens off the main thread, then dispatch
back to the main thread to call handleFolderResume/handleFileResume and
navigatePendingAfterResume; ensure errors are caught and handled on the
background task (preserving the Timber.error + dao.deleteByVaultId behavior) and
that any UI interactions occur on the main dispatcher.
- Around line 540-561: checkForInterruptedUpload currently calls
dao.findByVaultId(vault.id) on the main thread (called from the onSuccess
callback), which can block UI; change it to perform the DB lookup off the main
thread (e.g. use an Executor/IO coroutine Dispatcher) and then resume on the
main thread to update UI: run findByVaultId and parseJsonArraySize on a
background thread, then on the main thread call
ResumeUploadDialog.withContext(...).show(...) and set
pendingNavigationAfterResume or call navigateToVaultContent(vault, folder) as
appropriate; update checkForInterruptedUpload to accept/handle the async result
and ensure any UI interactions (ResumeUploadDialog.withContext,
pendingNavigationAfterResume, navigateToVaultContent) happen on the main thread.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadNotification.kt`:
- Around line 132-138: The NOTIFICATION_CHANNEL_NAME constant in the companion
object is hardcoded in English; change the code to obtain the channel name from
a string resource instead of a compile-time constant: add a string resource
(e.g. R.string.notification_channel_cryptomator_upload) in strings.xml, remove
or replace NOTIFICATION_CHANNEL_NAME so the UploadNotification code that creates
the NotificationChannel uses
context.getString(R.string.notification_channel_cryptomator_upload) (similar to
other uses of context.getString in this class), and update any references to
NOTIFICATION_CHANNEL_NAME to use the context-derived string.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 213-239: The UploadService class contains an initialized but
unused field contentResolverUtil (set in Binder.init()), so either remove the
UploadService.contentResolverUtil field and its initialization in Binder.init()
if it’s not needed, or add a public accessor in UploadService (e.g.,
getContentResolverUtil()) and expose it via Binder by adding a corresponding
getContentResolverUtil() method that returns
UploadService.this.contentResolverUtil; update Binder.init() only if keeping the
field to ensure it remains initialized.
In
`@presentation/src/main/java/org/cryptomator/presentation/ui/dialog/ResumeUploadDialog.kt`:
- Around line 47-49: The init block in ResumeUploadDialog currently does an
unsafe cast "callback = context as Callback" which can throw a
ClassCastException with no clear message; change this to validate the context
first (e.g., check "context is Callback") and either assign via a safe cast or
throw an IllegalArgumentException with a descriptive message like
"ResumeUploadDialog requires a Context implementing ResumeUploadDialog.Callback"
so callers get a clear error; update the init block that sets the callback in
the ResumeUploadDialog class accordingly.
- Around line 22-27: Replace the manual String.format call in ResumeUploadDialog
(where you call setMessage with
String.format(context.getString(R.string.dialog_resume_upload_message),
completedCount, totalCount)) with the idiomatic Context.getString overload: call
context.getString(R.string.dialog_resume_upload_message, completedCount,
totalCount) and pass that to setMessage; this removes the unnecessary
String.format wrapper and uses the platform formatting API in the
ResumeUploadDialog.kt code path.
In `@presentation/src/main/res/values/strings.xml`:
- Around line 630-640: Update the notification string named
notification_upload_message to include a unit label so the progress is
unambiguous (e.g., change the text from "Uploading %1$d/%2$d" to "Uploading
%1$d/%2$d files"); locate notification_upload_message in the strings.xml near
the other upload strings and mirror the phrasing used by
dialog_upload_file_remaining to keep consistency with finished/failed messages.
Resolved review threads:
data/src/androidTest/java/org/cryptomator/data/db/UpgradeDatabaseTest.kt
domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java
presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt
presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt (outdated)
presentation/src/main/java/org/cryptomator/presentation/service/UploadNotification.kt
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java
Actionable comments posted: 5
🧹 Nitpick comments (2)
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java (1)
99-128: Worker thread fields not cleared in `finally`; stale `worker.isAlive()` check on terminated thread.
After the worker thread exits, `worker` still references the completed `Thread` object. While `isAlive()` returns `false` for a terminated thread so the guard at line 87 still works, setting `worker = null` in `finally` makes the intent clearer and avoids holding a reference to a dead thread.
Proposed fix:
 } finally {
+	cancelCallback = null;
+	worker = null;
 	stopForeground(STOP_FOREGROUND_DETACH);
 	stopSelf();
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 99 - 128, The worker Thread field is left referencing a terminated Thread, which can be misleading and keeps a stale object; in the finally block of the run lambda inside UploadService (the code that calls stopForeground(STOP_FOREGROUND_DETACH) and stopSelf()), set the UploadService.worker field to null so the terminated Thread is cleared (ensure this assignment happens after any final cleanup so existing isAlive() checks continue to behave correctly).
presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt (1)
568-592: DB and repository calls on main thread in `onResumeUploadConfirmed`.
Lines 571 (`dao.findByVaultId`), 574 (`vaultRepository().load`), and 582 (`cloudRepository().decryptedViewOf`) all perform I/O on the main thread. While these are likely fast for single-record lookups, they still risk ANR under contention. Consider wrapping the heavy lifting in a background thread if this becomes a problem in practice.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt` around lines 568 - 592, onResumeUploadConfirmed is doing blocking I/O on the main thread (dao.findByVaultId, applicationComponent().vaultRepository().load, applicationComponent().cloudRepository().decryptedViewOf); move those calls to a background dispatcher (e.g., launch a coroutine on Dispatchers.IO) and then switch back to the main thread only for UI work (calling handleFolderResume/handleFileResume and navigatePendingAfterResume). Specifically, perform dao.findByVaultId, vaultRepository().load and cloudRepository().decryptedViewOf inside the IO block, capture results/exceptions there (preserve the Timber.e and dao.deleteByVaultId behavior on error), and then resume on the main dispatcher to call handleFolderResume or handleFileResume and navigatePendingAfterResume.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@data/src/androidTest/java/org/cryptomator/data/db/UpgradeDatabaseTest.kt`:
- Line 945: The test in UpgradeDatabaseTest.kt inserts a column value for
"PENDING_FILE_URIS" (insert .text("PENDING_FILE_URIS", null)) but the subsequent
assertion block (the expectation checks around the same test that validate the
upgraded schema/row) does not assert that column, leaving schema changes
undetected; update the assertion block in the same test to include an assertion
for the PENDING_FILE_URIS column (expecting null or the intended default) so the
test explicitly verifies the presence and value of that column after upgrade.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 657-665: The JSONException in parseJsonArray is currently
swallowed; change the catch block to log the exception for diagnostics before
returning emptyList(). Locate the parseJsonArray function and in the catch (e:
JSONException) block call the class's logger (e.g., logger.error or LOG.error)
or an appropriate logging facility to record a clear message like "Failed to
parse JSON array for checkpoints" and include the exception e, then return
emptyList().
- Around line 594-615: handleFolderResume is calling
buildUploadFolderStructure(documentFile) on the main thread which can block UI;
move the folder traversal off the UI thread (e.g., dispatch
buildUploadFolderStructure and subsequent startFolderUpload to a background
thread/Coroutine/Executor) and invoke view updates and navigation only from the
main thread when the background work completes. Specifically: in
handleFolderResume, instead of calling buildUploadFolderStructure synchronously,
submit a background task that calls buildUploadFolderStructure(documentFile) and
then calls cryptomatorApp.startFolderUpload(...) on completion; ensure any UI
calls (view?.showMessage(...) and navigatePendingAfterResume() from
onResumeUploadConfirmed) are executed only after the background task finishes
(e.g., by invoking navigatePendingAfterResume in the background task's success
callback on the UI thread) and preserve the existing SecurityException handling
and checkpoint deletion flow.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 154-158: The manual JSON serialization in toJsonArray(Set<String>
items) is fragile (only escapes quotes); replace it with org.json.JSONArray
usage: create a new JSONArray, iterate the items Set (or pass a Collection) and
put each string into the JSONArray so JSON escaping is handled correctly, then
return the JSONArray.toString(); update the toJsonArray method to use JSONArray
to produce the array string so it matches VaultListPresenter.parseJsonArray's
parsing.
- Around line 160-170: The first progress update is skipped because
elapsedTimeNotificationDelay starts at 0 so updateNotification() takes the else
branch; fix by initializing the delay state so the first call posts
immediately—set elapsedTimeNotificationDelay to a value >200 (e.g.
Long.MAX_VALUE or 201) or initialize startTimeNotificationDelay to
System.currentTimeMillis() at construction so the first call into
updateNotification() (method updateNotification) enters the post branch and
calls notification.update(asPercentage); update usages of
startTimeNotificationDelay and elapsedTimeNotificationDelay accordingly.
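The first-update fix can be reduced to a single timestamp seeded far in the past. This is a sketch of the suggested behavior, not the service's actual fields (which, per the earlier comment, also need `volatile` for cross-thread visibility):

```java
// Sketch of the throttle fix: track the last post time and seed it far in
// the past so the very first progress update is posted instead of being
// swallowed by the 200 ms delay window.
public class NotificationThrottleSketch {

    // Seeded so the first elapsed-time check exceeds the 200 ms threshold
    // (MIN_VALUE / 2 avoids overflow when subtracting).
    private long startTimeNotificationDelay = Long.MIN_VALUE / 2;
    int posted = 0;

    boolean updateNotification(long now) {
        long elapsed = now - startTimeNotificationDelay;
        if (elapsed > 200) {
            posted++; // would call notification.update(asPercentage) here
            startTimeNotificationDelay = now;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        NotificationThrottleSketch s = new NotificationThrottleSketch();
        System.out.println(s.updateNotification(0));   // true: first update posts
        System.out.println(s.updateNotification(100)); // false: within the window
        System.out.println(s.updateNotification(301)); // true: window elapsed
    }
}
```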
---
Duplicate comments:
In `@data/src/androidTest/java/org/cryptomator/data/db/UpgradeDatabaseTest.kt`:
- Around line 1005-1007: Detekt flags the empty catch of
android.database.sqlite.SQLiteConstraintException in UpgradeDatabaseTest as a
swallowed exception, but the empty body is intentional for this
constraint-violation test; suppress the warning by adding a
`@Suppress`("SwallowedException") annotation to the test method (or the
UpgradeDatabaseTest class) that contains the catch, keeping the catch as-is for
the assertion semantics.
In
`@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFolderFiles.java`:
- Around line 107-114: createFolderSafe currently returns a locally-constructed
CloudFolder reference in the catch block which lacks server-side validation;
change the catch for CloudNodeAlreadyExistsException in createFolderSafe to
retrieve and return the actual server-backed node from cloudContentRepository
(e.g. call the repository's lookup method such as
cloudContentRepository.find(...) or cloudContentRepository.getByPath(...) for
parent and folderName) instead of returning cloudContentRepository.folder(...),
and if that lookup fails throw a BackendException so callers always receive a
validated server-backed CloudFolder.
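The lookup-on-conflict behavior the comment asks for can be sketched against an in-memory stand-in; the map-backed `server` and all method names here are assumptions, not the real `cloudContentRepository` API:

```java
import java.util.HashMap;
import java.util.Map;

public class CreateFolderSafeDemo {
    static class AlreadyExists extends Exception {}

    // In-memory stand-in for the server's folder table.
    static final Map<String, String> server = new HashMap<>();

    static String create(String path) throws AlreadyExists {
        if (server.containsKey(path)) {
            throw new AlreadyExists();
        }
        server.put(path, "server:" + path);
        return server.get(path);
    }

    static String find(String path) {
        return server.get(path); // server-side lookup of the existing node
    }

    static String createFolderSafe(String path) {
        try {
            return create(path);
        } catch (AlreadyExists e) {
            // Return the validated, server-backed node instead of a
            // locally constructed one.
            return find(path);
        }
    }

    public static void main(String[] args) {
        System.out.println(createFolderSafe("/docs")); // server:/docs (created)
        System.out.println(createFolderSafe("/docs")); // server:/docs (looked up)
    }
}
```

Either path returns a node the server has actually seen, which is the property the review is after.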
In `@presentation/src/main/res/values/strings.xml`:
- Line 647: The string resource dialog_resume_upload_message currently embeds a
plural-sensitive fragment "(%1$d of %2$d files completed)" which renders
awkwardly for singular counts; update the resource identified by name
"dialog_resume_upload_message" to use a plural-safe phrasing (for example
"%1$d/%2$d completed" or use Android plurals with <plurals> and reference the
correct quantity for the first parameter) so the displayed text is grammatically
correct for both singular and plural counts.
---
Nitpick comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 568-592: onResumeUploadConfirmed is doing blocking I/O on the main
thread (dao.findByVaultId, applicationComponent().vaultRepository().load,
applicationComponent().cloudRepository().decryptedViewOf); move those calls to a
background dispatcher (e.g., launch a coroutine on Dispatchers.IO) and then
switch back to the main thread only for UI work (calling
handleFolderResume/handleFileResume and navigatePendingAfterResume).
Specifically, perform dao.findByVaultId, vaultRepository().load and
cloudRepository().decryptedViewOf inside the IO block, capture
results/exceptions there (preserve the Timber.e and dao.deleteByVaultId behavior
on error), and then resume on the main dispatcher to call handleFolderResume or
handleFileResume and navigatePendingAfterResume.
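The comment's fix is Kotlin coroutines (`Dispatchers.IO`, then back to the main dispatcher); the same shape, blocking work moved off the caller's thread with the result consumed afterwards, can be sketched in plain Java with an executor (all names hypothetical):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OffMainThreadDemo {
    // Stand-in for the blocking calls named in the comment
    // (dao.findByVaultId, vaultRepository().load, ...).
    static String blockingLoad() {
        return "checkpoint";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService io = Executors.newSingleThreadExecutor();
        Future<String> result = io.submit(OffMainThreadDemo::blockingLoad);
        // The caller (the "main thread") only resumes for UI work once the
        // blocking I/O has finished on the background executor.
        System.out.println(result.get()); // checkpoint
        io.shutdown();
    }
}
```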
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 99-128: The worker Thread field is left referencing a terminated
Thread, which can be misleading and keeps a stale object; in the finally block
of the run lambda inside UploadService (the code that calls
stopForeground(STOP_FOREGROUND_DETACH) and stopSelf()), set the
UploadService.worker field to null so the terminated Thread is cleared (ensure
this assignment happens after any final cleanup so existing isAlive() checks
continue to behave correctly).
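The clear-the-stale-reference step can be sketched like this; `WorkerDemo` stands in for `UploadService`, and the empty job stands in for the upload work:

```java
public class WorkerDemo {
    static volatile Thread worker;

    static void startWork(Runnable job) {
        worker = new Thread(() -> {
            try {
                job.run();
            } finally {
                // Real service cleanup (stopForeground/stopSelf) would go first;
                // clearing the field is the very last step.
                worker = null;
            }
        });
        worker.start();
    }

    public static void main(String[] args) {
        startWork(() -> {});
        while (worker != null) {
            Thread.onSpinWait(); // wait until the finally block has run
        }
        System.out.println(worker == null); // true
    }
}
```

Because the field is cleared last, any `isAlive()` check either sees a live thread or sees null, never a terminated-but-retained `Thread`.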
data/src/androidTest/java/org/cryptomator/data/db/UpgradeDatabaseTest.kt
presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt
presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java
Actionable comments posted: 1
🧹 Nitpick comments (1)
presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt (1)
206-221: `createUploadCheckpoint` silently no-ops when binder is null — upload proceeds without checkpoint protection.

If `uploadServiceBinder` is null at Line 208, the method returns without logging or feedback. The upload will still start (via `startFileUpload`/`startFolderUpload`, which log the drop), but if only the checkpoint fails while the binder becomes available moments later for the actual upload, files could be uploaded without any resume capability.

Add a log statement consistent with the pattern used in the upload methods:

Proposed fix

```diff
 fun createUploadCheckpoint(vaultId: Long, type: String, targetFolderPath: String, sourceFolderUri: String?, sourceFolderName: String?, pendingFileUris: List<String>, totalFileCount: Int) {
-    val dao = uploadServiceBinder?.uploadCheckpointDao ?: return
+    val dao = uploadServiceBinder?.uploadCheckpointDao ?: run {
+        Timber.tag("App").e("Upload service not connected, checkpoint creation skipped")
+        return
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt` around lines 206 - 221, createUploadCheckpoint currently returns silently when uploadServiceBinder is null (uploadServiceBinder?.uploadCheckpointDao ?: return), which can let uploads proceed without checkpoint protection; change that early-return to log a warning/error using the same logger used by startFileUpload/startFolderUpload (e.g., log.warn or log.info as those methods do) including vaultId, type and targetFolderPath for context, then return; update the null-check around uploadServiceBinder/uploadCheckpointDao in createUploadCheckpoint to emit that log before returning.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 134-150: createCheckpointCallback currently initializes
uploadedSoFar as an empty set which drops previously-completed files across
resumes; change createCheckpointCallback to accept the existing completedFiles
(e.g., Set<String> completedFiles) and initialize uploadedSoFar from a defensive
copy of that set, then update all call sites that invoke
createCheckpointCallback(vaultId) to pass the session's completedFiles; keep the
rest of the callback logic (toJsonArray,
uploadCheckpointDao.updateCompletedFiles, Handler/notification) unchanged.
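The defensive-copy seeding described above, reduced to a minimal sketch (class and method names are illustrative, not from the PR):

```java
import java.util.HashSet;
import java.util.Set;

public class CheckpointCallbackDemo {
    private final Set<String> uploadedSoFar;

    CheckpointCallbackDemo(Set<String> completedFiles) {
        // Defensive copy: the callback is seeded with files completed in a
        // previous session, and later mutation of the caller's set has no effect.
        this.uploadedSoFar = new HashSet<>(completedFiles);
    }

    void onFileUploaded(String path) {
        uploadedSoFar.add(path);
    }

    Set<String> snapshot() {
        return Set.copyOf(uploadedSoFar);
    }

    public static void main(String[] args) {
        Set<String> completed = new HashSet<>(Set.of("a.txt"));
        CheckpointCallbackDemo cb = new CheckpointCallbackDemo(completed);
        completed.clear();          // caller mutates its own set afterwards
        cb.onFileUploaded("b.txt");
        System.out.println(cb.snapshot()); // contains both a.txt and b.txt
    }
}
```

Seeding from the prior session is what keeps already-uploaded files in the checkpoint across a resume.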
---
Duplicate comments:
In `@presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt`:
- Around line 227-229: toJsonArray currently does manual string building and
only escapes double quotes, missing backslashes and other JSON-escaping rules;
replace its implementation in the CryptomatorApp.toJsonArray function to use
org.json.JSONArray for proper serialization (construct a new JSONArray(items)
and return its toString()), so the produced JSON matches the parser and
correctly escapes special characters.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 152-156: The toJsonArray method currently hand-escapes only double
quotes and is fragile; replace its manual serialization with proper JSON
construction using org.json.JSONArray: create a new JSONArray, add each element
from the Set<String> (preserving strings as-is so JSONArray handles escaping),
and return JSONArray.toString() from toJsonArray to ensure correct escaping of
backslashes and all JSON-special characters (update any imports and references
to toJsonArray accordingly).
- Line 47: The elapsedTimeNotificationDelay field in UploadService is
initialized to 0L which prevents the very first progress notification from
firing; change its initialization so it looks like a past timestamp older than
the progress update threshold (e.g., set elapsedTimeNotificationDelay to
System.currentTimeMillis() - (PROGRESS_UPDATE_THRESHOLD_MS + 1) or equivalent)
so the first callback treats the last-notified time as sufficiently old and
sends an immediate update; apply the same change to the other occurrence used
around the progress-notification logic (the block mentioned at 158-168) so both
places use the past-timestamp initialization.
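Both `toJsonArray` findings above reduce to the same failure mode: quote-only escaping emits invalid JSON as soon as an item contains a backslash. The actual fix is `org.json.JSONArray`; this stdlib-only sketch (which handles only a few escape cases) merely illustrates what a correct serializer must do:

```java
import java.util.List;
import java.util.stream.Collectors;

public class JsonArrayDemo {
    // Minimal escaper covering the cases the hand-rolled version misses;
    // a real serializer (JSONArray) also handles control characters.
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '\\' -> sb.append("\\\\");
                case '"'  -> sb.append("\\\"");
                case '\n' -> sb.append("\\n");
                default   -> sb.append(c);
            }
        }
        return sb.toString();
    }

    static String toJsonArray(List<String> items) {
        return items.stream()
                .map(s -> "\"" + escape(s) + "\"")
                .collect(Collectors.joining(",", "[", "]"));
    }

    public static void main(String[] args) {
        // Quote-only escaping would leave the trailing backslash unescaped here,
        // producing an unterminated JSON string.
        System.out.println(toJsonArray(List.of("dir\\"))); // ["dir\\"]
    }
}
```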
---
Nitpick comments:
In `@presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt`:
- Around line 206-221: createUploadCheckpoint currently returns silently when
uploadServiceBinder is null (uploadServiceBinder?.uploadCheckpointDao ?:
return), which can let uploads proceed without checkpoint protection; change
that early-return to log a warning/error using the same logger used by
startFileUpload/startFolderUpload (e.g., log.warn or log.info as those methods
do) including vaultId, type and targetFolderPath for context, then return;
update the null-check around uploadServiceBinder/uploadCheckpointDao in
createUploadCheckpoint to emit that log before returning.
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java
Actionable comments posted: 3
🧹 Nitpick comments (1)
presentation/src/main/java/org/cryptomator/presentation/ui/dialog/ResumeUploadDialog.kt (1)
7-9: Consider a safer callback binding than `context as Callback`.

The unchecked cast on line 9 throws a `ClassCastException` at construction with no context-specific message if `context` doesn't implement `Callback`. The error is no more descriptive than "VaultListActivity cannot be cast to ResumeUploadDialog$Callback", making future misuse hard to diagnose.

Suggested improvement

```diff
-    private val callback: Callback = context as Callback
+    private val callback: Callback = requireNotNull(context as? Callback) {
+        "${context.javaClass.simpleName} must implement ResumeUploadDialog.Callback"
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/ui/dialog/ResumeUploadDialog.kt` around lines 7 - 9, The constructor for ResumeUploadDialog is performing an unsafe cast using "context as Callback" which can throw an opaque ClassCastException; update the binding in the ResumeUploadDialog constructor to perform a safe check and throw a clear, descriptive exception (or accept a Callback parameter) instead of relying on the unchecked cast—e.g., use a safe cast (context as? Callback) and if null throw IllegalArgumentException with a message naming ResumeUploadDialog and Callback, or change the constructor signature to accept a Callback directly; update any callers accordingly.
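The Kotlin fix above uses `requireNotNull(context as? Callback)`; an equivalent fail-fast binding in plain Java, with illustrative names, looks like this:

```java
public class ResumeDialogDemo {
    interface Callback { void onResume(); }

    private final Callback callback;

    ResumeDialogDemo(Object context) {
        // Explicit type check with a message naming both sides, instead of an
        // opaque ClassCastException from an unchecked cast.
        if (!(context instanceof Callback cb)) {
            throw new IllegalArgumentException(
                context.getClass().getSimpleName() + " must implement ResumeDialogDemo.Callback");
        }
        this.callback = cb;
    }

    void fire() {
        callback.onResume();
    }

    public static void main(String[] args) {
        try {
            new ResumeDialogDemo("not a callback");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // String must implement ResumeDialogDemo.Callback
        }
    }
}
```

The descriptive message is the point: a future misuse names the offending class instead of surfacing as a bare cast failure.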
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 1080-1083: The code currently calls
context().contentResolver.takePersistableUriPermission(treeUri,
Intent.FLAG_GRANT_READ_URI_PERMISSION) and DocumentFile.fromTreeUri(context(),
treeUri) before the background thread's try/catch, which can throw and crash the
presenter; wrap the takePersistableUriPermission call in a
try/catch(SecurityException) and handle the exception (e.g., log and continue
without persisting the grant), and wrap DocumentFile.fromTreeUri in a
try/catch(Exception) to handle any creation errors (e.g., log, show user
feedback, and return early) so that lastSelectedFolderUri and downstream logic
only proceed when these operations succeed.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 569-593: In onResumeUploadConfirmed, the early returns when dao is
null or checkpoint is null leave pendingNavigationAfterResume set and never
consumed; update the two early-return paths (the checks of
getUploadCheckpointDao() and dao.findByVaultId(vaultId)) to call
navigatePendingAfterResume() before returning so pendingNavigationAfterResume is
cleared/consumed (keep the existing Timber/error handling behavior for the vault
load failure path which already calls navigatePendingAfterResume()); this
targets the onResumeUploadConfirmed function and variables cryptomatorApp, dao,
checkpoint, and pendingNavigationAfterResume.
- Around line 604-616: The current try block in handleFolderResume only catches
SecurityException so other exceptions from buildUploadFolderStructure propagate;
change the catch to catch Exception (or add an additional catch (e: Exception))
around DocumentFile.fromTreeUri/buildUploadFolderStructure and handle it the
same way as the SecurityException (log via Timber.tag("VaultListPresenter").w(e,
...), call view?.showMessage(R.string.dialog_resume_upload_permission_lost), and
dao.deleteByVaultId(vaultId)); ensure after the exception handling you still
invoke navigatePendingAfterResume() (same flow as when
BrowseFilesPresenter.selectedFolder handles errors) so the resume flow continues
correctly.
---
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 658-666: In parseJsonArray, don't swallow the caught JSONException
silently; update the catch block to log the exception (including the exception
object and a clear message like "Failed to parse JSON array in parseJsonArray")
using the class's logger (or create one if absent) and then return emptyList()
as the safe fallback; keep the method signature and behavior otherwise unchanged
so malformed checkpoint data is recorded for diagnostics while still returning
an empty list.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Line 47: The field elapsedTimeNotificationDelay initialized to 0 causes
updateNotification to treat the first call as if a previous timestamp exists and
skip posting; change the logic so the first update always posts by initializing
elapsedTimeNotificationDelay to a sentinel (e.g., -1) or update
updateNotification to check for a "no previous value" case (compare against
sentinel) and always post on the first call, then set/update
elapsedTimeNotificationDelay after posting; apply the same fix to the other
similar block referenced (the logic in the second updateNotification-like
section).
---
Nitpick comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/ui/dialog/ResumeUploadDialog.kt`:
- Around line 7-9: The constructor for ResumeUploadDialog is performing an
unsafe cast using "context as Callback" which can throw an opaque
ClassCastException; update the binding in the ResumeUploadDialog constructor to
perform a safe check and throw a clear, descriptive exception (or accept a
Callback parameter) instead of relying on the unchecked cast—e.g., use a safe
cast (context as? Callback) and if null throw IllegalArgumentException with a
message naming ResumeUploadDialog and Callback, or change the constructor
signature to accept a Callback directly; update any callers accordingly.
presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt
presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt
presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt
presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt
🧹 Nitpick comments (1)
domain/src/test/java/org/cryptomator/domain/usecases/cloud/StreamHelperTest.kt (1)
76-91: Test assertion doesn't fully verify the claim in the test name.

The test is titled "copy closes both streams when write throws IOException" but only verifies `output.close()`. While `ByteArrayInputStream.close()` is a no-op so functionality isn't affected, consider using a mock for `input` to verify both streams are closed, keeping the test consistent with the others.

♻️ Suggested improvement for consistency

```diff
 @Test
 @DisplayName("copy closes both streams when write throws IOException")
 fun copyClosesStreamsOnWriteError() {
-    val data = byteArrayOf(1, 2, 3)
-    val input = ByteArrayInputStream(data)
+    val input: InputStream = mock {
+        on { read(any<ByteArray>()) } doReturn 3 doReturn -1
+    }
     val output: OutputStream = mock {
         on { write(any<ByteArray>(), any(), any()) } doThrow IOException("write failed")
     }
     try {
         StreamHelper.copy(input, output)
     } catch (_: IOException) {
     }
+    verify(input).close()
     verify(output).close()
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/test/java/org/cryptomator/domain/usecases/cloud/StreamHelperTest.kt` around lines 76 - 91, The test copyClosesStreamsOnWriteError claims StreamHelper.copy closes both streams but only verifies output.close(); replace the concrete ByteArrayInputStream with a mocked InputStream (or spy) so you can verify input.close() as well, keep the output mocked to throw IOException from write, call StreamHelper.copy(input, output) inside the try/catch as before, and add verify(input).close() in addition to verify(output).close() to assert both streams are closed.
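For reference, a copy that provides what these tests verify (both streams closed on every exit path) can be sketched with try-with-resources; `StreamHelper`'s real implementation is assumed, not copied, here:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyDemo {
    static void copy(InputStream in, OutputStream out) throws IOException {
        // try-with-resources closes both streams on all exit paths,
        // including when read() or write() throws.
        try (in; out) {
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream(new byte[] {1, 2, 3}), out);
        System.out.println(out.size()); // 3
    }
}
```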
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 668-676: The JSONException in parseJsonArray is being swallowed;
modify the catch block to log the error (including the exception and the
offending json string) before returning emptyList() so production diagnostics
are preserved; update VaultListPresenter.kt's parseJsonArray catch to call the
class logger (e.g., logger.warn or logger.error) with a message like "Failed to
parse checkpoint JSON" plus the json value and pass the caught JSONException to
the logger; if no logger exists in the class, add a private logger instance and
use it for the log call.
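The log-then-fall-back shape the comment asks for, sketched with plain JDK logging; integer parsing stands in for the `JSONException` case because `org.json` is unavailable in a stdlib-only example:

```java
import java.util.Collections;
import java.util.List;
import java.util.logging.Logger;

public class ParseFallbackDemo {
    private static final Logger LOG = Logger.getLogger(ParseFallbackDemo.class.getName());

    static List<Integer> parseOrEmpty(String raw) {
        try {
            return List.of(Integer.parseInt(raw.trim()));
        } catch (NumberFormatException e) {
            // The failure is recorded for diagnostics instead of being swallowed;
            // the caller still gets a safe empty list.
            LOG.warning("Failed to parse checkpoint data '" + raw + "': " + e);
            return Collections.emptyList();
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrEmpty("42"));       // [42]
        System.out.println(parseOrEmpty("not-json")); // []
    }
}
```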
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 46-48: The first progress notification is skipped because
elapsedTimeNotificationDelay is initialized to 0L so the first
updateNotification comparison fails; change the initialization of
elapsedTimeNotificationDelay in UploadService to a sentinel (e.g., -1L) and
adjust updateNotification to treat a negative elapsedTimeNotificationDelay as
"never posted" so the first call will post immediately (referencing
elapsedTimeNotificationDelay, startTimeNotificationDelay and the
updateNotification method/cancelCallback logic to locate where to change the
initialization and conditional).
In `@presentation/src/main/res/values/strings.xml`:
- Around line 647-648: Replace the plural-sensitive phrasing with a neutral,
numeric format by updating the string resource dialog_resume_upload_message to
use "%1$d/%2$d files uploaded" (already present) and keep
dialog_resume_upload_title as "Resume upload?"; ensure
dialog_resume_upload_message is the canonical resource used by the resume upload
flow (e.g., the code paths that previously used plural resources now reference
dialog_resume_upload_message) so the UI always shows the neutral "%1$d/%2$d
files uploaded" form.
---
Nitpick comments:
In
`@domain/src/test/java/org/cryptomator/domain/usecases/cloud/StreamHelperTest.kt`:
- Around line 76-91: The test copyClosesStreamsOnWriteError claims
StreamHelper.copy closes both streams but only verifies output.close(); replace
the concrete ByteArrayInputStream with a mocked InputStream (or spy) so you can
verify input.close() as well, keep the output mocked to throw IOException from
write, call StreamHelper.copy(input, output) inside the try/catch as before, and
add verify(input).close() in addition to verify(output).close() to assert both
streams are closed.
Actionable comments posted: 3
🧹 Nitpick comments (3)
presentation/src/test/java/org/cryptomator/presentation/presenter/VaultListPresenterTest.java (1)
129-160: Missing test coverage for new `VaultListPresenter` behaviors.

The four new mocks (`uploadCheckpointDao`, `vaultRepository`, `cloudRepository`, `contentResolverUtil`) are never stubbed or verified anywhere in the file, which confirms that none of the new presenter functionality introduced by this PR is exercised here. Specifically, the following paths are untested:

- `handleFolderResume`: checkpoint lookup → resume-or-cancel dialog → folder-structure rebuilding.
- Upload-checkpoint detection on vault unlock (the flow that queries `uploadCheckpointDao` and prompts the user).
- Folder-upload initiation through `contentResolverUtil`-backed SAF traversal.

Given that `handleFolderResume` is already documented as intentionally running on the main thread (with a known ANR risk caveat), a unit test covering at least the happy path and the "no pending checkpoints" early-exit would be valuable.

Would you like me to draft test cases for `handleFolderResume` and the checkpoint-detection-on-unlock flow?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/test/java/org/cryptomator/presentation/presenter/VaultListPresenterTest.java` around lines 129 - 160, The test class lacks coverage for new VaultListPresenter behavior: add unit tests that exercise VaultListPresenter.handleFolderResume and the upload-checkpoint detection triggered on vault unlock by stubbing and verifying the new mocks (uploadCheckpointDao, vaultRepository, cloudRepository, contentResolverUtil). For handleFolderResume add one test for the happy path where uploadCheckpointDao returns a checkpoint, verify the presenter shows the resume-or-cancel dialog and that folder-structure rebuilding methods on cloudRepository/vaultRepository/contentResolverUtil are invoked, and another test where uploadCheckpointDao returns no checkpoint to verify the early exit (no dialog, no rebuild). For the unlock flow add a test that stubs uploadCheckpointDao to return a checkpoint when the presenter unlocks a vault (simulate the unlock call), assert the dialog prompt and subsequent resume initiation, and a negative case where no checkpoint is found; use when(...).thenReturn(...) and verify(...) against the presenter methods and the mocks added in setup.domain/src/test/java/org/cryptomator/domain/usecases/cloud/StreamHelperTest.kt (1)
59-74: Consider asserting that `copy` actually propagates the `IOException`.

Both error-path tests swallow exceptions silently. If `StreamHelper.copy` were to suppress read/write errors (rather than rethrowing), these tests would still pass while the production contract would be broken. Replacing the bare `try`/`catch` with `assertThrows` makes the propagation contract explicit:

♻️ Proposed refactor — assert exception is thrown
```diff
 @Test
 @DisplayName("copy closes both streams when read throws IOException")
 fun copyClosesStreamsOnReadError() {
     val input: InputStream = mock {
         on { read(any<ByteArray>()) } doThrow IOException("read failed")
     }
     val output: OutputStream = mock()
-    try {
-        StreamHelper.copy(input, output)
-    } catch (_: IOException) {
-    }
+    assertThrows<IOException> {
+        StreamHelper.copy(input, output)
+    }
     verify(input).close()
     verify(output).close()
 }
```

Apply the same pattern to
`copyClosesStreamsOnWriteError`.

Also applies to: 76-91
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/test/java/org/cryptomator/domain/usecases/cloud/StreamHelperTest.kt` around lines 59 - 74, The tests copyClosesStreamsOnReadError and copyClosesStreamsOnWriteError currently swallow IOExceptions and therefore don't assert that StreamHelper.copy actually propagates errors; replace the bare try/catch with an assertion that an IOException is thrown (e.g., use assertThrows/Assertions.assertThrows for IOException while calling StreamHelper.copy(input, output)), and retain the verify(input).close() and verify(output).close() assertions so you both confirm the exception is propagated and the streams are closed.presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt (1)
1158-1160: `toJsonArray` is now duplicated in at least three places.

This hand-rolled JSON array serializer also exists in `UploadService.java` and `CryptomatorApp.kt` (per prior context). Extracting it to a shared utility (e.g., a top-level `fun toUploadJsonArray(items: Iterable<String>): String`) would eliminate the duplication and ensure a single place to update if the escaping logic ever needs to change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt` around lines 1158 - 1160, The toJsonArray implementation in BrowseFilesPresenter.toJsonArray is duplicated elsewhere; extract it into a single shared top-level utility function (e.g., fun toUploadJsonArray(items: Iterable<String>): String) placed in a common util file, implement the same escaping logic there, then replace calls to BrowseFilesPresenter.toJsonArray and the equivalent implementations in UploadService.java and CryptomatorApp.kt to call toUploadJsonArray(items); ensure the signature uses Iterable<String> for broad compatibility and keep the exact escaping behavior (replace "\"" with "\\\"") so callers' behavior remains unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@domain/src/test/java/org/cryptomator/domain/usecases/cloud/StreamHelperTest.kt`:
- Around line 76-91: The test copyClosesStreamsOnWriteError currently uses a
real ByteArrayInputStream so Mockito cannot verify input.close(); change the
test's input to a mocked InputStream (mocking its read(...) behavior to simulate
returning the byte array then EOF) so you can assert both streams are closed,
then add verify(input).close() alongside verify(output).close() after calling
StreamHelper.copy to ensure input closure is tested when write throws.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 431-448: The guard using return@let can exit without calling
cryptomatorApp.unSuspendLock(), leaving the vault suspended; update the lambda
in BrowseFilesPresenter (the block that reads uploadLocation?.let { ... }) to
call cryptomatorApp.unSuspendLock() before any early return when cloud == null
or vaultId == null, and mirror the same defensive unSuspendLock call in the
symmetric uploadFolder/uploadFiles code path (and where
uploadCheckpointDao.deleteByVaultId(...) is used) so the lock is always resumed
on all failure/exits (use explicit if-checks that call unSuspendLock() then
return).
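The invariant these comments enforce (the lock is resumed on every exit path: success, early return, or exception) reduces to a try/finally; `suspendLock`/`unSuspendLock` are modeled as a boolean flag in this sketch:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class LockGuardDemo {
    static final AtomicBoolean suspended = new AtomicBoolean(false);

    static boolean prepareUpload(boolean fail) {
        suspended.set(true); // suspendLock()
        try {
            if (fail) {
                throw new IllegalStateException("checkpoint write failed");
            }
            return true;
        } catch (IllegalStateException e) {
            return false;    // the error would be surfaced to the UI flow
        } finally {
            suspended.set(false); // unSuspendLock() runs on every exit path
        }
    }

    public static void main(String[] args) {
        System.out.println(prepareUpload(false) + " " + suspended.get()); // true false
        System.out.println(prepareUpload(true) + " " + suspended.get());  // false false
    }
}
```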
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/TextEditorPresenter.kt`:
- Around line 80-87: The temp URI created before calling
cryptomatorApp.startFileUpload (variable uri) is leaked when startFileUpload
returns false because cleanupUris is only handed to the upload service on
success; update the else branch to delete/cleanup the temp URI immediately
(e.g., call cleanupUris(listOf(uri)) or delete the temp file) so the temp
resource is removed when cryptomatorApp.startFileUpload(...) returns false;
locate the call site around startFileUpload(...) and add the cleanup call in the
else branch (referencing uri, startFileUpload, and cleanupUris).
---
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 668-676: The catch in parseJsonArray currently swallows
JSONException; update parseJsonArray to log the exception and offending json
before returning emptyList. Specifically, inside the catch (e: JSONException)
block in function parseJsonArray, call the existing logger (e.g., logger.error
or logger.warn) to record a clear message that the checkpoint JSON was
malformed, include the exception e and the json string, then return emptyList as
the fallback.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Line 54: elapsedTimeNotificationDelay is initialized to 0 which causes the
first updateNotification() call to be treated as "accumulate" and skipped;
change the logic so the first per-file progress is posted immediately:
initialize elapsedTimeNotificationDelay to -1 (or leave as 0) and modify
updateNotification() to treat a non-positive elapsedTimeNotificationDelay as
"not set" (post the notification immediately and then set
elapsedTimeNotificationDelay = System.currentTimeMillis()); apply the same fix
to the other updateNotification-like block that uses
elapsedTimeNotificationDelay (the duplicate at the other location).
---
Nitpick comments:
In
`@domain/src/test/java/org/cryptomator/domain/usecases/cloud/StreamHelperTest.kt`:
- Around line 59-74: The tests copyClosesStreamsOnReadError and
copyClosesStreamsOnWriteError currently swallow IOExceptions and therefore don't
assert that StreamHelper.copy actually propagates errors; replace the bare
try/catch with an assertion that an IOException is thrown (e.g., use
assertThrows/Assertions.assertThrows for IOException while calling
StreamHelper.copy(input, output)), and retain the verify(input).close() and
verify(output).close() assertions so you both confirm the exception is
propagated and the streams are closed.
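`assertThrows` in the suggested refactor is JUnit 5's `Assertions.assertThrows`; a plain-JDK stand-in makes the contract it enforces explicit (real tests should use the JUnit version):

```java
public class AssertThrowsDemo {
    interface ThrowingRunnable { void run() throws Exception; }

    // Fails if the body does not throw, or throws the wrong type;
    // returns the exception so its message can be inspected.
    static <T extends Exception> T assertThrows(Class<T> expected, ThrowingRunnable body) {
        try {
            body.run();
        } catch (Exception e) {
            if (expected.isInstance(e)) {
                return expected.cast(e);
            }
            throw new AssertionError("expected " + expected.getSimpleName() + " but got " + e);
        }
        throw new AssertionError("expected " + expected.getSimpleName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        Exception e = assertThrows(java.io.IOException.class, () -> {
            throw new java.io.IOException("read failed");
        });
        System.out.println(e.getMessage()); // read failed
    }
}
```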
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 1158-1160: The toJsonArray implementation in
BrowseFilesPresenter.toJsonArray is duplicated elsewhere; extract it into a
single shared top-level utility function (e.g., fun toUploadJsonArray(items:
Iterable<String>): String) placed in a common util file, implement the same
escaping logic there, then replace calls to BrowseFilesPresenter.toJsonArray and
the equivalent implementations in UploadService.java and CryptomatorApp.kt to
call toUploadJsonArray(items); ensure the signature uses Iterable<String> for
broad compatibility and keep the exact escaping behavior (replace "\"" with
"\\\"") so callers' behavior remains unchanged.
In
`@presentation/src/test/java/org/cryptomator/presentation/presenter/VaultListPresenterTest.java`:
- Around line 129-160: The test class lacks coverage for new VaultListPresenter
behavior: add unit tests that exercise VaultListPresenter.handleFolderResume and
the upload-checkpoint detection triggered on vault unlock by stubbing and
verifying the new mocks (uploadCheckpointDao, vaultRepository, cloudRepository,
contentResolverUtil). For handleFolderResume add one test for the happy path
where uploadCheckpointDao returns a checkpoint, verify the presenter shows the
resume-or-cancel dialog and that folder-structure rebuilding methods on
cloudRepository/vaultRepository/contentResolverUtil are invoked, and another
test where uploadCheckpointDao returns no checkpoint to verify the early exit
(no dialog, no rebuild). For the unlock flow add a test that stubs
uploadCheckpointDao to return a checkpoint when the presenter unlocks a vault
(simulate the unlock call), assert the dialog prompt and subsequent resume
initiation, and a negative case where no checkpoint is found; use
when(...).thenReturn(...) and verify(...) against the presenter methods and the
mocks added in setup.
domain/src/test/java/org/cryptomator/domain/usecases/cloud/StreamHelperTest.kt
presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt
presentation/src/main/java/org/cryptomator/presentation/presenter/TextEditorPresenter.kt
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 183-186: The destroy logic must ensure
cryptomatorApp.unSuspendLock() is always called even if
folderStructureDisposable is disposed while buildUploadFolderStructure is
executing; update destroyed() to detect an active folderStructureDisposable and
either attach a doFinally()/finally handler to the Rx chain started by
buildUploadFolderStructure (so unSuspendLock() runs on completion/error/dispose)
or explicitly call cryptomatorApp.unSuspendLock() when disposing
folderStructureDisposable; reference folderStructureDisposable,
buildUploadFolderStructure, destroyed(), and cryptomatorApp.unSuspendLock() to
locate where to add the doFinally or explicit unSuspendLock() call.
- Around line 1142-1156: The DAO calls (uploadCheckpointDao.insertOrReplace in
saveCheckpoint and uploadCheckpointDao.deleteByVaultId calls used in
uploadFiles, uploadChangedFile and uploadFolder) must be moved off the main
thread and protected so cryptomatorApp.unSuspendLock() always runs; replace the
direct synchronous calls with a background/RxJava pattern consistent with
selectedFolder (e.g. Single.fromCallable { /* dao op */
}.subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread()) ) and
wrap the DB operation in try/catch/finally so any exception is caught and
cryptomatorApp.unSuspendLock() is invoked in finally before calling
startFileUpload/startFolderUpload; ensure the error is logged and errors
propagated to the UI flow as before.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 614-631: handleFolderResume launches an IO Single that may
complete after navigation, causing UI messages or progress updates to target a
backgrounded activity; fix by clearing pendingNavigationAfterResume before
starting the subscription and moving navigatePendingAfterResume() into the
subscription callbacks so navigation happens only after onSuccess or onError.
Specifically, in handleFolderResume (where folderResumeDisposable is assigned
and buildUploadFolderStructure is subscribed), set pendingNavigationAfterResume
= null immediately before creating the Single, and in the subscribe() callbacks
call navigatePendingAfterResume() (or clear it first) after handling success
(including the startFolderUpload branch and calling cryptomatorApp.unSuspendLock
on failure) and after onError (after deleting the checkpoint and showing the
error), ensuring progress is always completed and no UI messages are shown to a
gone view.
- Around line 580-601: onResumeUploadConfirmed currently calls
cryptomatorApp.unSuspendLock() unconditionally and always calls
navigatePendingAfterResume() immediately, which is wrong for async folder
resumes; fix by introducing a local boolean (e.g., lockWasSuspended) in
onResumeUploadConfirmed, set it to true only after calling
cryptomatorApp.suspendLock(), and in the catch/finally only call
cryptomatorApp.unSuspendLock() if lockWasSuspended is true; additionally, for
the folder branch (handleFolderResume which uses an async Single subscription),
remove the unconditional navigatePendingAfterResume() from the outer finally and
instead invoke navigatePendingAfterResume() from the folder resume
subscription’s onSuccess and onError callbacks so navigation occurs after async
work completes, while keeping the file branch behavior unchanged.
---
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 1145-1159: The synchronous database calls on the main thread must
be moved off the UI thread: change saveCheckpoint to perform
uploadCheckpointDao.insertOrReplace(entity) on a background dispatcher (e.g.,
use withContext(Dispatchers.IO) or launch a CoroutineScope(Dispatchers.IO)) or
make the DAO method suspend and call it from a coroutine; likewise replace all
direct uploadCheckpointDao.deleteByVaultId(vaultId) usages with
background-coroutine calls (or suspend DAO calls) so insertOrReplace and
deleteByVaultId never run synchronously on the main thread. Ensure error
handling/logging remains in place after the migration.
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 669-677: The parseJsonArray method swallows JSONException
silently; update the catch block in parseJsonArray to log the exception
(including the exception message/stacktrace and the raw json input) using the
presenter’s existing logger (or create a class-level logger if none) before
returning emptyList(), so corrupted or unparsable checkpoint data is recorded
for debugging while preserving the current return behavior.
Actionable comments posted: 2
🧹 Nitpick comments (2)
presentation/src/main/java/org/cryptomator/presentation/service/KeepAliveLeaseManager.kt (1)
28-35: Minor TOCTOU between expiry check and release — acceptable for this use case.

`hasActiveLease()` snapshots expired leases via `filter`, then releases them one by one. If a lease is concurrently renewed between the snapshot and the `release()` call, the renewal is silently discarded. Given the 1-second polling interval and the typical renewal patterns (same thread context), this is unlikely in practice, but worth documenting with a brief comment.

💡 Optional: add a clarifying comment

```diff
 fun hasActiveLease(): Boolean {
+    // Snapshot-then-release: a lease renewed between filter and release()
+    // will still be released. Acceptable for 1-second polling granularity.
     val expired = leases.filter { it.isExpired }
     expired.forEach { lease ->
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/KeepAliveLeaseManager.kt` around lines 28 - 35, hasActiveLease() currently filters expired leases into a snapshot and then calls release() which can drop a concurrent renewal (TOCTOU); add a brief comment inside KeepAliveLeaseManager::hasActiveLease explaining this race (snapshot -> release) is benign here due to the 1s polling interval and typical same-thread renewals, so concurrent renewals may be silently discarded but are acceptable for this use case, so future maintainers understand the trade-off.presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt (1)
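If the snapshot-then-release race ever needed a stronger fix than a comment, one direction is to make `release()` idempotent via compare-and-set, so the cleanup pass and a concurrent releaser cannot double-release. A minimal Java sketch under that assumption (`Lease` and `LeaseManager` are simplified stand-ins, not the actual `KeepAliveLease` API):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified stand-in for KeepAliveLease: expiry is a fixed timestamp and
// release() is idempotent thanks to the CAS on `released`.
class Lease {
    final long expiresAt;
    final AtomicBoolean released = new AtomicBoolean(false);

    Lease(long expiresAt) { this.expiresAt = expiresAt; }

    boolean release() { return released.compareAndSet(false, true); }
}

class LeaseManager {
    private final Queue<Lease> leases = new ConcurrentLinkedQueue<>();

    void add(Lease lease) { leases.add(lease); }

    boolean hasActiveLease(long now) {
        // Release-and-remove expired leases in one pass; the CAS in release()
        // guarantees each lease is released at most once even under races.
        leases.removeIf(lease -> {
            if (lease.expiresAt > now) {
                return false;
            }
            lease.release();
            return true;
        });
        return !leases.isEmpty();
    }
}
```

With idempotent release, the worst outcome of the snapshot race is an unnecessary (but harmless) extra CAS attempt rather than a lost renewal.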
269-280: `@Volatile` on `editingLease` is insufficient for the compound operations and may be misleading.

`@Volatile` guarantees memory visibility but not atomicity. `acquireEditingLease()` performs a non-atomic read→release→write sequence; two concurrent callers could double-release the old lease or orphan a freshly acquired one. If `acquireEditingLease`/`releaseEditingLease` are always invoked on the main thread (as expected for a text-editor lifecycle), `@Volatile` is unnecessary noise that implies thread-safe compound access when it isn't. If multi-threaded access is ever intended, `@Synchronized` or an `AtomicReference` is needed instead.

♻️ Option A — UI-thread-only: remove `@Volatile`

```diff
-    @Volatile
-    private var editingLease: KeepAliveLease? = null
+    private var editingLease: KeepAliveLease? = null
```

♻️ Option B — multi-thread safe: use synchronized methods

```diff
-    @Volatile
-    private var editingLease: KeepAliveLease? = null
+    private var editingLease: KeepAliveLease? = null

-    fun acquireEditingLease() {
-        editingLease?.release()
-        editingLease = acquireLease(LeaseReason.EDITING, 4 * 60 * 60 * 1000L, tag = "editing")
-    }
+    @Synchronized
+    fun acquireEditingLease() {
+        editingLease?.release()
+        editingLease = acquireLease(LeaseReason.EDITING, 4 * 60 * 60 * 1000L, tag = "editing")
+    }

-    fun releaseEditingLease() {
-        editingLease?.release()
-        editingLease = null
-    }
+    @Synchronized
+    fun releaseEditingLease() {
+        editingLease?.release()
+        editingLease = null
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt` around lines 269 - 280, The `@Volatile` on editingLease is misleading because acquireEditingLease does a read→release→write that isn’t atomic; remove the `@Volatile` annotation and make the compound operations thread-safe by marking the two accessors as synchronized: add `@Synchronized` to acquireEditingLease and releaseEditingLease so the read/release/assign sequence on editingLease is serialized (ensure releaseEditingLease still sets editingLease = null after releasing). This preserves current semantics while preventing double-release/orphaning if called concurrently.
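The `AtomicReference` alternative mentioned above makes the swap itself atomic without locks. A hedged Java sketch of that design (the `EditingLease` class is a hypothetical stand-in for `KeepAliveLease`, not the real API):

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for KeepAliveLease; only release() matters here.
class EditingLease {
    volatile boolean released = false;
    void release() { released = true; }
}

class EditingLeaseHolder {
    private final AtomicReference<EditingLease> editingLease = new AtomicReference<>();

    void acquire(EditingLease fresh) {
        // getAndSet swaps atomically: exactly one caller observes (and
        // releases) each previous lease, so no lease can be double-released
        // or orphaned by a concurrent acquire.
        EditingLease old = editingLease.getAndSet(fresh);
        if (old != null) {
            old.release();
        }
    }

    void release() {
        EditingLease old = editingLease.getAndSet(null);
        if (old != null) {
            old.release();
        }
    }
}
```

Compared with `@Synchronized`, this keeps the fast path lock-free, at the cost of each caller having to route every access through the holder.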
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 435-461: (uploadFiles) Ensure releasePickerLease is always called
when uploadFiles is invoked even if uploadLocation is null: add an early
null-check for uploadLocation at the start of uploadFiles and call
releasePickerLease() then return, or rewrite the existing uploadLocation?.let {
... } usage to handle the null case by invoking releasePickerLease() before
returning; reference symbols: uploadFiles, uploadLocation, releasePickerLease,
and the caller uploadFiles(filesReadyForUpload).
- Around line 1120-1147: The uploadFolder function can skip releasing the picker
lease when uploadLocation is null because the let block is skipped; add a
defensive early-null check for uploadLocation at the start of uploadFolder (or
use an Elvis operator) that calls releasePickerLease() and returns if
uploadLocation is null, so releasePickerLease() is always invoked when
uploadFolder exits due to a missing uploadLocation; update references in
uploadFolder (uploadLocation, releasePickerLease) accordingly.
---
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 687-689: The catch block that currently catches JSONException and
returns emptyList() should log the exception for diagnostics; update the catch
(e: JSONException) branch in VaultListPresenter to call the class logger (e.g.,
logger or LOG) and emit an error or warning message that includes a brief
context like "Failed to parse checkpoint data" plus the exception
message/stacktrace, then continue to return emptyList() so behavior is
unchanged.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 268-282: The throttling vars cause the first
updateNotification(int) call to compute a huge elapsedTimeNotificationDelay
because startTimeNotificationDelay defaults to 0; fix updateNotification so it
initializes startTimeNotificationDelay (and zeroes elapsedTimeNotificationDelay)
on the first invocation before using it. Concretely, in updateNotification check
if startTimeNotificationDelay == 0 (or elapsedTimeNotificationDelay == 0) and if
so set startTimeNotificationDelay = System.currentTimeMillis() and
elapsedTimeNotificationDelay = 0 so the first call can post the notification
immediately; keep the existing renew() call on uploadLease and the existing
notification.update(asPercentage) path otherwise.
---
Nitpick comments:
In `@presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt`:
- Around line 269-280: The `@Volatile` on editingLease is misleading because
acquireEditingLease does a read→release→write that isn’t atomic; remove the
`@Volatile` annotation and make the compound operations thread-safe by marking the
two accessors as synchronized: add `@Synchronized` to acquireEditingLease and
releaseEditingLease so the read/release/assign sequence on editingLease is
serialized (ensure releaseEditingLease still sets editingLease = null after
releasing). This preserves current semantics while preventing
double-release/orphaning if called concurrently.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/KeepAliveLeaseManager.kt`:
- Around line 28-35: hasActiveLease() currently filters expired leases into a
snapshot and then calls release() which can drop a concurrent renewal (TOCTOU);
add a brief comment inside KeepAliveLeaseManager::hasActiveLease explaining this
race (snapshot -> release) is benign here due to the 1s polling interval and
typical same-thread renewals, so concurrent renewals may be silently discarded
but are acceptable for this use case, so future maintainers understand the
trade-off.
Actionable comments posted: 2
🧹 Nitpick comments (1)
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java (1)
226-242: `new Handler(Looper.getMainLooper())` allocated on every progress/completion event.

Both `createCheckpointCallback` (per-file completion, line 236) and `updateNotification` (per-chunk progress, line 275) allocate a new `Handler` each time they fire. For uploads with many files or a large file with frequent progress callbacks, this creates steady GC pressure.

♻️ Proposed refactor — single reusable Handler field

```diff
+	private final Handler mainHandler = new Handler(Looper.getMainLooper());

 	private FileUploadedCallback createCheckpointCallback(long vaultId, Set<String> completedFiles) {
 		Set<String> uploadedSoFar = new HashSet<>(completedFiles);
 		return relativePath -> {
 			...
-			new Handler(Looper.getMainLooper()).post(() -> {
+			mainHandler.post(() -> {
 				if (notification != null) {
 					notification.updateFinishedFile();
 				}
 			});
 		};
 	}

 	private void updateNotification(int asPercentage) {
 		...
 		if (elapsedTimeNotificationDelay > 200 && !cancelled) {
-			new Handler(Looper.getMainLooper()).post(() -> {
+			mainHandler.post(() -> {
 				notification.update(asPercentage);
 				...
 			});
```

Also applies to: 269-283
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 226 - 242, The callback currently creates a new Handler on the main looper for every event (see createCheckpointCallback and updateNotification), causing GC churn; fix it by adding a single reusable Handler field (e.g., private final Handler mainHandler or Handler mainThreadHandler) on the UploadService class, initialize it once (new Handler(Looper.getMainLooper())) in the constructor or field initializer, and replace all per-event new Handler(Looper.getMainLooper()).post(...) calls in createCheckpointCallback and updateNotification with mainHandler.post(...); ensure thread-safety if the handler is assigned in a constructor and referenced by those methods.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 96-108: Race condition: a cancel can arrive after startUpload sets
workerRunning=true but before the worker thread executes cancelled = false,
losing the cancel; move the reset of the cancelled flag into the same
synchronized(queueLock) block in startUpload (inside startUpload where
workerRunning is set) so cancelled is cleared before releasing the lock, and in
processUpload add an immediate pre-check that returns if cancelled is true
(before resetting cancelled or starting work) to ensure any cancel already set
is honored; apply the same synchronized-reset + pre-check pattern to the other
location where processUpload resets cancelled.
- Around line 292-306: The cancel branch in onStartCommand (handling
isCancelUpload(intent)) performs drainAndCleanupQueue(), sets cancelled and runs
cancelCallback, and hides the notification but never stops the service; update
the cancel handling inside onStartCommand so that after performing
drainAndCleanupQueue(), setting cancelled, invoking cancelCallback, and
hideNotification() it also calls stopSelf() to terminate the service instance
when no worker was started (ensure stopSelf() is called only in this cancel path
and does not interfere with the normal worker/finally stopSelf() logic).
---
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Line 1094: The current buildUploadFolderStructure implementation in
UploadFolderStructureBuilder.kt is still recursive and can trigger
StackOverflowError for deep directories; replace the recursion with an iterative
traversal (e.g., use an explicit Stack or Deque to perform depth-first or a
Queue for breadth-first) inside buildUploadFolderStructure so it builds the same
folder nodes without recursive calls, keep the method signature used by
folderStructureDisposable = Single.fromCallable {
buildUploadFolderStructure(documentFile) } unchanged, and add a unit/integration
test that constructs a very deep nested folder structure to validate the
iterative implementation avoids stack overflow.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Line 54: The first progress update is skipped because
elapsedTimeNotificationDelay is initialized to 0L; change its initial value to
201L (or any value >200) so the first call to updateNotification posts
immediately, and apply the same fix for the other occurrence within the
updateNotification-related logic (referenced around lines handling
elapsedTimeNotificationDelay in updateNotification) to ensure consistent
behavior.
---
Nitpick comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 226-242: The callback currently creates a new Handler on the main
looper for every event (see createCheckpointCallback and updateNotification),
causing GC churn; fix it by adding a single reusable Handler field (e.g.,
private final Handler mainHandler or Handler mainThreadHandler) on the
UploadService class, initialize it once (new Handler(Looper.getMainLooper())) in
the constructor or field initializer, and replace all per-event new
Handler(Looper.getMainLooper()).post(...) calls in createCheckpointCallback and
updateNotification with mainHandler.post(...); ensure thread-safety if the
handler is assigned in a constructor and referenced by those methods.
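The first-progress-update bug called out above (the throttle gate never opening on the first call) can be sketched independently of Android. The names below are illustrative, not the actual `UploadService` members; the "post immediately when never posted" check mirrors the suggested `startTimeNotificationDelay == 0` fix:

```java
// Minimal throttle sketch: post at most once per 200 ms, but always post
// the very first update. `posted` stands in for notification.update(...).
class ProgressThrottle {
    private long lastPostMillis = 0L; // 0 means "never posted yet"
    int posted = 0;

    void update(int asPercentage, long nowMillis) {
        if (lastPostMillis == 0L || nowMillis - lastPostMillis > 200) {
            posted++; // stand-in for notification.update(asPercentage)
            lastPostMillis = nowMillis;
        }
    }
}
```

Taking the clock as a parameter keeps the gate logic deterministic and testable; the real service would pass `System.currentTimeMillis()`.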
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFiles.java (1)
110-122: ⚠️ Potential issue | 🟡 Minor

Missing cleanup on IOException in `copyDataToFile`.

If `StreamHelper.copy(in, out)` throws an exception after the temp file is created, the temp file is not deleted. Compare with `UploadFolderFiles.java` (lines 157-167 in relevant snippets) which properly wraps the copy in a nested try-catch to delete the target file on failure.

🛡️ Proposed fix to add cleanup

```diff
 	private File copyDataToFile(DataSource dataSource) {
 		File dir = context.getCacheDir();
 		try {
 			File target = createTempFile("upload", "tmp", dir);
-			InputStream in = CancelAwareDataSource.wrap(dataSource, cancelledFlag).open(context);
-			OutputStream out = new FileOutputStream(target);
-			StreamHelper.copy(in, out);
-			dataSource.modifiedDate(context).ifPresent(value -> target.setLastModified(value.getTime()));
-			return target;
+			try {
+				InputStream in = CancelAwareDataSource.wrap(dataSource, cancelledFlag).open(context);
+				OutputStream out = new FileOutputStream(target);
+				StreamHelper.copy(in, out);
+				dataSource.modifiedDate(context).ifPresent(value -> target.setLastModified(value.getTime()));
+				return target;
+			} catch (IOException e) {
+				target.delete();
+				throw e;
+			}
 		} catch (IOException e) {
 			throw new FatalBackendException(e);
 		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFiles.java` around lines 110 - 122, The copyDataToFile method creates a temp File (target) but does not delete it if StreamHelper.copy or subsequent operations throw an IOException; update copyDataToFile to use try-with-resources for the InputStream (CancelAwareDataSource.wrap(...).open(...)) and OutputStream (new FileOutputStream(target)) and wrap the copy in a nested try-catch so that on any IOException you close streams and delete the target file before rethrowing (wrap/convert into FatalBackendException as currently done); reference the copyDataToFile method, the created target file, CancelAwareDataSource.wrap, and StreamHelper.copy when implementing this cleanup pattern (similar to the approach used in UploadFolderFiles).
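The prompt above also suggests try-with-resources; as a self-contained sketch of that cleanup pattern (`TempCopy.copyToTemp` and the `transferTo` call are illustrative stand-ins, not the actual `UploadFiles`/`StreamHelper` code):

```java
import java.io.*;

class TempCopy {

    // Copies `in` into a fresh temp file under `dir`; on any IOException the
    // half-written temp file is deleted before the exception propagates.
    static File copyToTemp(InputStream in, File dir) throws IOException {
        File target = File.createTempFile("upload", "tmp", dir);
        try (InputStream src = in;
             OutputStream out = new FileOutputStream(target)) {
            src.transferTo(out); // stand-in for StreamHelper.copy(in, out)
            return target;
        } catch (IOException e) {
            if (!target.delete()) {
                target.deleteOnExit();
            }
            throw e;
        }
    }
}
```

Because try-with-resources closes both streams before the `catch` block runs, `target.delete()` is not racing an open file handle when the copy fails.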
🧹 Nitpick comments (2)
domain/src/test/java/org/cryptomator/domain/usecases/cloud/UploadFolderFilesTest.kt (1)
514-547: Consider extracting shared test utilities.

The `dataSourceWithBytes` and `bytes` helpers are duplicated between `UploadFileTest.kt` (lines 96-129) and this file. While test isolation is valuable, extracting these to a shared test utility class would reduce duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/test/java/org/cryptomator/domain/usecases/cloud/UploadFolderFilesTest.kt` around lines 514 - 547, The helper functions dataSourceWithBytes(...) and bytes(...) are duplicated across UploadFolderFilesTest and UploadFileTest; extract them into a single shared test utility (e.g., TestDataSources or TestUtils) as reusable top-level functions or a companion object, keep the same signatures and behavior (including checks, ByteArrayInputStream, size/modifiedDate/close implementations), update both UploadFolderFilesTest and UploadFileTest to call the shared functions, and ensure visibility/imports are adjusted so tests compile without changing behavior.presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt (1)
267-278: Potential lease orphaning with concurrent `acquireEditingLease` calls.

If `acquireEditingLease()` is called concurrently from multiple threads, the non-atomic read-release-assign sequence on `editingLease` could orphan a lease (one thread's newly acquired lease gets overwritten by another's). While editing operations are likely UI-thread-only, consider using a synchronized block if this could ever be called from multiple threads.

♻️ Optional thread-safe version

```diff
+	private val editingLeaseLock = Any()
+
 	fun acquireEditingLease() {
-		editingLease?.release()
-		editingLease = acquireLease(LeaseReason.EDITING, 4 * 60 * 60 * 1000L, tag = "editing")
+		synchronized(editingLeaseLock) {
+			editingLease?.release()
+			editingLease = acquireLease(LeaseReason.EDITING, 4 * 60 * 60 * 1000L, tag = "editing")
+		}
 	}

 	fun releaseEditingLease() {
-		editingLease?.release()
-		editingLease = null
+		synchronized(editingLeaseLock) {
+			editingLease?.release()
+			editingLease = null
+		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt` around lines 267 - 278, The current non-atomic sequence in acquireEditingLease (reading editingLease, releasing it, then assigning a new lease) can orphan leases if called concurrently; make access to editingLease thread-safe by synchronizing around both acquireEditingLease and releaseEditingLease (or using a dedicated lock) so that the read-release-assign sequence and the release-and-null sequence are atomic; update methods editingLease, acquireEditingLease, releaseEditingLease (and any call-sites using acquireLease/KeepAliveLease) to use the same lock to prevent races.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadUiUpdates.kt`:
- Around line 24-31: The snapshots map is initialized non-atomically in emit
when handling UploadUiEvent.NodeCreated using snapshots.getOrPut, which can lose
concurrent inserts; replace that call with an atomic compute-if-absent using
snapshots.computeIfAbsent(event.folderKey) { ConcurrentHashMap() } and then put
the node into the returned map (keep references to UploadUiEvent.NodeCreated,
snapshots, FolderKey, and emit) so the per-folder ConcurrentHashMap is created
only once under concurrency.
---
Outside diff comments:
In `@domain/src/main/java/org/cryptomator/domain/usecases/cloud/UploadFiles.java`:
- Around line 110-122: The copyDataToFile method creates a temp File (target)
but does not delete it if StreamHelper.copy or subsequent operations throw an
IOException; update copyDataToFile to use try-with-resources for the InputStream
(CancelAwareDataSource.wrap(...).open(...)) and OutputStream (new
FileOutputStream(target)) and wrap the copy in a nested try-catch so that on any
IOException you close streams and delete the target file before rethrowing
(wrap/convert into FatalBackendException as currently done); reference the
copyDataToFile method, the created target file, CancelAwareDataSource.wrap, and
StreamHelper.copy when implementing this cleanup pattern (similar to the
approach used in UploadFolderFiles).
---
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 685-693: parseJsonArray currently swallows JSONException; preserve
the emptyList() fallback but add a diagnostic log in the catch block: inside
VaultListPresenter.parseJsonArray catch (e: JSONException) { ... } call the
class' existing logger (e.g. logger.warn or logger.error) to record the failed
JSON string and the exception (include the json value and e) so you can diagnose
corrupted checkpoint data without changing the return behavior.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 355-369: When onStartCommand receives a cancel intent
(isCancelUpload(intent)), ensure the service stops if no worker is active: after
calling drainAndCleanupQueue(), setting cancelled = true, running cancelCallback
and hideNotification(), add a check of the workerRunning flag and call
stopSelf() when workerRunning == false so the service doesn't linger; update the
onStartCommand method (referencing onStartCommand, isCancelUpload,
drainAndCleanupQueue, cancelCallback, cancelled, hideNotification,
workerRunning) to perform that stopSelf() call in the cancel branch.
- Around line 105-117: The race occurs between startUpload and processUpload:
set workerRunning = true inside the queueLock but do not reset cancelled after
that window; to fix, inside the synchronized(queueLock) block in startUpload
(the method creating QueuedUpload and touching workerRunning), also reset the
shared cancelled flag to false and initialize cancelCallback appropriately so a
cancel arriving before processUpload cannot be lost; additionally add an early
pre-check at the start of processUpload that checks the cancelled flag (and
aborts/returns if true) before clearing/resetting cancelCallback to avoid
overwriting a cancellation that arrived while the worker was being started.
- Around line 61-62: The first progress notification is delayed because
elapsedTimeNotificationDelay is initialized to 0L so updateNotification() takes
the accumulation branch; change the initialization so the first call triggers an
immediate post by initializing elapsedTimeNotificationDelay to the notification
threshold (e.g., set elapsedTimeNotificationDelay = notificationDelayMillis or
200L) or adjust updateNotification() to treat a zero value as "send immediately"
(use the field names startTimeNotificationDelay and elapsedTimeNotificationDelay
and updateNotification() to implement the chosen fix).
---
Nitpick comments:
In
`@domain/src/test/java/org/cryptomator/domain/usecases/cloud/UploadFolderFilesTest.kt`:
- Around line 514-547: The helper functions dataSourceWithBytes(...) and
bytes(...) are duplicated across UploadFolderFilesTest and UploadFileTest;
extract them into a single shared test utility (e.g., TestDataSources or
TestUtils) as reusable top-level functions or a companion object, keep the same
signatures and behavior (including checks, ByteArrayInputStream,
size/modifiedDate/close implementations), update both UploadFolderFilesTest and
UploadFileTest to call the shared functions, and ensure visibility/imports are
adjusted so tests compile without changing behavior.
In `@presentation/src/main/java/org/cryptomator/presentation/CryptomatorApp.kt`:
- Around line 267-278: The current non-atomic sequence in acquireEditingLease
(reading editingLease, releasing it, then assigning a new lease) can orphan
leases if called concurrently; make access to editingLease thread-safe by
synchronizing around both acquireEditingLease and releaseEditingLease (or using
a dedicated lock) so that the read-release-assign sequence and the
release-and-null sequence are atomic; update methods editingLease,
acquireEditingLease, releaseEditingLease (and any call-sites using
acquireLease/KeepAliveLease) to use the same lock to prevent races.
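The `getOrPut` → `computeIfAbsent` fix for the `UploadUiUpdates` snapshot map (described in the inline comments above) can be sketched in plain Java; the folder/node names and `String` payload are illustrative, not the real event types:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class UploadSnapshots {

    private final ConcurrentMap<String, ConcurrentMap<String, String>> snapshots =
            new ConcurrentHashMap<>();

    void onNodeCreated(String folderKey, String nodeId, String node) {
        // computeIfAbsent creates the per-folder map atomically, so two
        // threads emitting NodeCreated for the same folder always share one
        // inner map instead of racing to install two and losing inserts.
        snapshots.computeIfAbsent(folderKey, k -> new ConcurrentHashMap<>())
                .put(nodeId, node);
    }

    int nodeCount(String folderKey) {
        ConcurrentMap<String, String> nodes = snapshots.get(folderKey);
        return nodes == null ? 0 : nodes.size();
    }
}
```

On a `ConcurrentHashMap`, `computeIfAbsent` runs the mapping function at most once per absent key, which is exactly the guarantee a non-atomic get-then-put sequence lacks.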
🧹 Nitpick comments (3)
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java (3)
52-57: `notification` field is written on the worker thread but read on the main thread without a visibility guarantee.

`notification` (line 52) is not `volatile`. It is assigned on the worker thread in `processUpload` (line 159) and later read on the main thread via the `Handler` post in `createCheckpointCallback` (line 262). While Android's `Handler.post()` mechanism typically provides a happens-before through its internal message queue, the Java Memory Model doesn't formally guarantee visibility of a non-volatile field written on Thread A and read on Thread B without explicit synchronization.

In practice this is unlikely to cause issues on current Android runtimes, but marking it `volatile` would be a low-cost correctness improvement.

Proposed fix

```diff
-	private UploadNotification notification;
+	private volatile UploadNotification notification;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 52 - 57, The field notification is written on the worker thread in processUpload and later read on the main thread in the Runnable posted from createCheckpointCallback, so add a visibility guarantee by declaring the UploadService.notification field as volatile; update the declaration of notification to be volatile to ensure writes in processUpload are visible to the Handler.post() read without requiring further synchronization.
245-267: Checkpoint callback seeded with `completedFiles` — addresses prior concern.

`uploadedSoFar` is correctly initialized from the existing `completedFiles` set so multi-resume scenarios preserve history.

One minor note: a new `Handler(Looper.getMainLooper())` is allocated on every file-uploaded callback invocation (line 261). Consider caching a single `Handler` instance as a field to reduce allocation churn during large uploads.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 245 - 267, The callback creates a new Handler on every invocation which causes allocation churn; modify UploadService to reuse a single main-thread Handler instead of creating one per uploaded file: add a private final Handler mainHandler (initialized once in the UploadService constructor or as a lazy field using new Handler(Looper.getMainLooper())), replace new Handler(Looper.getMainLooper()).post(...) in createCheckpointCallback with mainHandler.post(...), keeping the rest of the logic (uploadedSoFar, emitFileCreated, notification.updateFinishedFile) unchanged.
409-437: `Binder` `init()` has no guard against use-before-init.

`getUploadCheckpointDao()` (line 434) and the `startFileUpload`/`startFolderUpload` delegations will NPE if called before `init()`. Consider adding a null-check or a boolean `initialized` flag, or documenting the required call order.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 409 - 437, The Binder.init() method can leave the Binder in an uninitialized state causing NPEs when getUploadCheckpointDao(), startFileUpload(...) or startFolderUpload(...) delegate to UploadService.this.*; add a private boolean initialized flag (or check for non-null UploadService.this.uploadCheckpointDao/cloudContentRepository) in the Binder, set initialized = true at the end of init(), and at the start of getUploadCheckpointDao(), startFileUpload(...) and startFolderUpload(...) validate initialized (or the required fields) and throw a clear IllegalStateException (or return a controlled error) if not initialized to avoid silent NPEs and document the required call order.
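The guard suggested above can be exercised in isolation. This is a minimal sketch: the nested `UploadBinder`, its `init()`/`getUploadCheckpointDao()` methods, and the `Object` placeholder for the DAO mirror the review text but are illustrative stand-ins, not the project's actual types.

```java
// Minimal sketch of the use-before-init guard: a flag set at the end of
// init() and checked before every delegation, so misuse fails fast with a
// clear IllegalStateException instead of an NPE deep inside the service.
public class Main {

	static class UploadBinder {

		private Object uploadCheckpointDao;
		private boolean initialized;

		void init(Object dao) {
			this.uploadCheckpointDao = dao;
			this.initialized = true; // set last, after all fields are assigned
		}

		Object getUploadCheckpointDao() {
			requireInitialized();
			return uploadCheckpointDao;
		}

		private void requireInitialized() {
			if (!initialized) {
				throw new IllegalStateException("Binder used before init() was called");
			}
		}
	}

	public static void main(String[] args) {
		UploadBinder binder = new UploadBinder();
		boolean rejected = false;
		try {
			binder.getUploadCheckpointDao();
		} catch (IllegalStateException e) {
			rejected = true;
		}
		binder.init(new Object());
		System.out.println("rejected before init: " + rejected);
		System.out.println("dao available after init: " + (binder.getUploadCheckpointDao() != null));
	}
}
```

Setting the flag after all field assignments keeps the failure mode deterministic: callers either get a descriptive exception or a fully initialized binder, never a half-built one.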
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 685-691: The parseJsonArray function currently swallows
JSONException and returns emptyList(), making diagnostics impossible; update the
catch block for JSONException to log the failure (including the exception
message/stack) before returning emptyList(), using the presenter class' logger
(e.g., logger.error or similar) so callers can trace corrupted checkpoint data;
keep behavior of returning emptyList() but ensure the log includes the raw json
string and the caught exception for context.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 335-349: The first progress update is dropped because
elapsedTimeNotificationDelay starts at 0 so the initial call hits the else
branch; modify updateNotification so it forces the first update when
startTimeNotificationDelay == 0 (or when elapsedTimeNotificationDelay > 200) by
changing the condition to: if (startTimeNotificationDelay == 0 ||
elapsedTimeNotificationDelay > 200) and then post the notification
(notification.update(...)) and set startTimeNotificationDelay =
System.currentTimeMillis() and elapsedTimeNotificationDelay = 0; keep the
existing KeepAliveLease renew and cancelled check intact.
- Around line 358-377: The onStartCommand cancel branch already prevents the
service from lingering by draining the queue and calling stopSelf() only inside
the synchronized (queueLock) check when workerRunning is false; no code change
required—keep the logic in onStartCommand, retain calls to
drainAndCleanupQueue(), the cancelled flag, cancelCallback.run(),
hideNotification(), and the synchronized (queueLock) { if (!workerRunning)
stopSelf(); } behavior as-is to address the prior linger concern.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadUiUpdates.kt`:
- Around line 24-36: No code change is required: the atomic initialization is
correctly handled by snapshots.computeIfAbsent in emit and the processor is made
thread-safe via toSerialized(); keep the logic in emit (references: snapshots,
computeIfAbsent, processor, emit, UploadUiEvent.NodeCreated,
UploadUiEvent.UploadFinished) as-is and optionally remove the
redundant/duplicate review marker "[duplicate_comment]" from the review metadata
if desired.
---
Nitpick comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 52-57: The field notification is written on the worker thread in
processUpload and later read on the main thread in the Runnable posted from
createCheckpointCallback, so add a visibility guarantee by declaring the
UploadService.notification field as volatile; update the declaration of
notification to be volatile to ensure writes in processUpload are visible to the
Handler.post() read without requiring further synchronization.
- Around line 245-267: The callback creates a new Handler on every invocation
which causes allocation churn; modify UploadService to reuse a single
main-thread Handler instead of creating one per uploaded file: add a private
final Handler mainHandler (initialized once in the UploadService constructor or
as a lazy field using new Handler(Looper.getMainLooper())), replace new
Handler(Looper.getMainLooper()).post(...) in createCheckpointCallback with
mainHandler.post(...), keeping the rest of the logic (uploadedSoFar,
emitFileCreated, notification.updateFinishedFile) unchanged.
- Around line 409-437: The Binder.init() method can leave the Binder in an
uninitialized state causing NPEs when getUploadCheckpointDao(),
startFileUpload(...) or startFolderUpload(...) delegate to UploadService.this.*;
add a private boolean initialized flag (or check for non-null
UploadService.this.uploadCheckpointDao/cloudContentRepository) in the Binder,
set initialized = true at the end of init(), and at the start of
getUploadCheckpointDao(), startFileUpload(...) and startFolderUpload(...)
validate initialized (or the required fields) and throw a clear
IllegalStateException (or return a controlled error) if not initialized to avoid
silent NPEs and document the required call order.
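The notification-throttle fix described in the prompts above (force the first update, then rate-limit to one update per 200 ms) can be sketched as a standalone helper. The 200 ms window and the zero-means-never-updated convention come from the review text; the class and method names here are illustrative.

```java
// Sketch of the corrected throttle condition: the first call always passes
// because the stored timestamp is still 0, later calls pass only once the
// 200 ms window has elapsed since the last accepted update.
public class Main {

	static class NotificationThrottle {

		private long lastUpdateMillis = 0;

		boolean shouldUpdate(long nowMillis) {
			if (lastUpdateMillis == 0 || nowMillis - lastUpdateMillis > 200) {
				lastUpdateMillis = nowMillis;
				return true;
			}
			return false;
		}
	}

	public static void main(String[] args) {
		NotificationThrottle throttle = new NotificationThrottle();
		System.out.println(throttle.shouldUpdate(1000)); // first update is never dropped
		System.out.println(throttle.shouldUpdate(1100)); // 100 ms later: suppressed
		System.out.println(throttle.shouldUpdate(1300)); // 300 ms later: allowed again
	}
}
```

Without the `lastUpdateMillis == 0` clause, the very first progress update would be swallowed, which is exactly the bug the review flags.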
Actionable comments posted: 3
🧹 Nitpick comments (3)
presentation/src/main/res/layout/item_vault.xml (1)
60-70: Add `tools:src` for layout preview and `android:contentDescription` for accessibility.
Two minor gaps in the new `upload_indicator` view:
- No drawable source in XML — the view is completely blank in the layout editor (and at runtime until a drawable is set programmatically). Adding a `tools:src` makes the file self-documenting and keeps the preview usable.
- No `android:contentDescription` — TalkBack will announce nothing meaningful when the indicator is made visible. Even a concise string resource (e.g. `@string/upload_in_progress`) helps screen-reader users.
🎨 Proposed additions
 <ImageView
 	android:id="@+id/upload_indicator"
 	android:layout_width="59dp"
 	android:layout_height="59dp"
 	android:layout_centerVertical="true"
 	android:layout_toStartOf="@id/settings"
 	android:paddingStart="14dp"
 	android:paddingTop="10dp"
 	android:paddingEnd="14dp"
 	android:paddingBottom="10dp"
+	android:contentDescription="@string/upload_in_progress"
+	tools:src="@drawable/ic_upload_indicator"
 	android:visibility="gone" />
(`tools:` attributes require `xmlns:tools="http://schemas.android.com/tools"` in the root element if not already present.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/res/layout/item_vault.xml` around lines 60 - 70, The ImageView with id upload_indicator is missing a preview drawable and an accessibility label; add a tools:src attribute (e.g. tools:src="@drawable/your_preview_drawable") so the layout editor shows the indicator and add android:contentDescription="@string/upload_in_progress" (create that string resource) so TalkBack announces it when visible; ensure the layout root declares xmlns:tools="http://schemas.android.com/tools" if not already present.
presentation/src/main/res/animator/upload_fill.xml (1)
6-9: Consider `linear_interpolator` for a more uniform looping fill effect
No interpolator is specified, so the animation defaults to `AccelerateDecelerateInterpolator`. Because this is a continuously restarting activity indicator (not a one-shot transition), a linear interpolator produces an even, constant-rate fill that reads more naturally as "ongoing progress."
♻️ Proposed addition
 android:duration="1200"
+android:interpolator="@android:anim/linear_interpolator"
 android:repeatCount="infinite"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/res/animator/upload_fill.xml` around lines 6 - 9, The animator in upload_fill.xml currently uses the default AccelerateDecelerateInterpolator, causing uneven speed on a looping progress animation; update the animator element (e.g., the ObjectAnimator or AnimatorSet defined in upload_fill.xml) to use a linear interpolator by adding android:interpolator="@android:interpolator/linear" to the element so the repeated fill runs at a constant rate for a smoother ongoing progress effect.
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java (1)
251-273: Consider reusing the main-thread Handler instead of allocating one per callback invocation.
Line 267 creates a `new Handler(Looper.getMainLooper())` on every file-uploaded callback. While functionally correct, this generates unnecessary short-lived allocations on the upload hot path. A single `Handler` field would be cleaner.
♻️ Proposed refactor
Add a field at the class level:
private final Handler mainHandler = new Handler(Looper.getMainLooper());
Then replace line 267:
- new Handler(Looper.getMainLooper()).post(() -> {
+ mainHandler.post(() -> {
Apply the same to `updateNotification` (line 359).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 251 - 273, The callback currently allocates a new Handler(Looper.getMainLooper()) on every invocation in createCheckpointCallback (and similarly in updateNotification), causing unnecessary short-lived allocations; add a class-level field (e.g., private final Handler mainHandler = new Handler(Looper.getMainLooper())) and replace the per-call new Handler(...) uses with mainHandler.post(...) so both createCheckpointCallback and updateNotification reuse the same main-thread Handler instance.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 689-697: The resumed file upload is hardcoding
thatIsReplacing(true) which overrides the user's original choice; instead, use
the checkpoint's stored preference. Update the UploadFile builder in the
uploadFiles mapping (the code that calls
UploadFile.anUploadFile().withFileName(...).withDataSource(...).thatIsReplacing(true).build())
to pass checkpoint.isReplacing rather than true so handleFileResume respects the
same checkpoint.isReplacing flag used by handleFolderResume.
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 340-351: deleteTempFiles currently constructs Files with new
File(uri.getPath()) which only works for file:// URIs and will silently fail for
content:// or other schemes; update deleteTempFiles to validate the Uri scheme
before converting (e.g., check uri.getScheme() equals "file" or use Uri.fromFile
callers), and if the scheme is unsupported either log a warning including the
uri or throw/skip accordingly; also add a brief Javadoc to deleteTempFiles
documenting it expects file:// URIs and reference the places that create these
temp URIs (BrowseFilesPresenter and TextEditorPresenter using Uri.fromFile) so
future callers know the constraint.
In `@presentation/src/main/res/animator/upload_fill.xml`:
- Line 7: Replace the animator attribute value android:repeatCount="infinite"
with the documented integer form android:repeatCount="-1"; locate the attribute
in upload_fill.xml (the android:repeatCount declaration) and change the enum
name "infinite" to the string "-1" so the animator XML uses the official
documented value.
---
Nitpick comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 251-273: The callback currently allocates a new
Handler(Looper.getMainLooper()) on every invocation in createCheckpointCallback
(and similarly in updateNotification), causing unnecessary short-lived
allocations; add a class-level field (e.g., private final Handler mainHandler =
new Handler(Looper.getMainLooper())) and replace the per-call new Handler(...)
uses with mainHandler.post(...) so both createCheckpointCallback and
updateNotification reuse the same main-thread Handler instance.
In `@presentation/src/main/res/animator/upload_fill.xml`:
- Around line 6-9: The animator in upload_fill.xml currently uses the default
AccelerateDecelerateInterpolator, causing uneven speed on a looping progress
animation; update the animator element (e.g., the ObjectAnimator or AnimatorSet
defined in upload_fill.xml) to use a linear interpolator by adding
android:interpolator="@android:interpolator/linear" to the element so the
repeated fill runs at a constant rate for a smoother ongoing progress effect.
In `@presentation/src/main/res/layout/item_vault.xml`:
- Around line 60-70: The ImageView with id upload_indicator is missing a preview
drawable and an accessibility label; add a tools:src attribute (e.g.
tools:src="@drawable/your_preview_drawable") so the layout editor shows the
indicator and add android:contentDescription="@string/upload_in_progress"
(create that string resource) so TalkBack announces it when visible; ensure the
layout root declares xmlns:tools="http://schemas.android.com/tools" if not
already present.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- data/src/main/java/org/cryptomator/data/db/UploadCheckpointDao.java
- presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt
- presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java
- presentation/src/main/java/org/cryptomator/presentation/service/UploadUiUpdates.kt
- presentation/src/main/java/org/cryptomator/presentation/ui/activity/VaultListActivity.kt
- presentation/src/main/java/org/cryptomator/presentation/ui/activity/view/VaultListView.kt
- presentation/src/main/java/org/cryptomator/presentation/ui/adapter/VaultsAdapter.kt
- presentation/src/main/java/org/cryptomator/presentation/ui/fragment/VaultListFragment.kt
- presentation/src/main/res/animator/upload_fill.xml
- presentation/src/main/res/drawable-night/ic_upload_arrow.xml
- presentation/src/main/res/drawable-night/ic_upload_arrow_fill.xml
- presentation/src/main/res/drawable/ic_upload_arrow.xml
- presentation/src/main/res/drawable/ic_upload_arrow_animated.xml
- presentation/src/main/res/drawable/ic_upload_arrow_fill.xml
- presentation/src/main/res/layout/item_vault.xml
- presentation/src/test/java/org/cryptomator/presentation/presenter/VaultListPresenterTest.java
✅ Files skipped from review due to trivial changes (3)
- presentation/src/main/res/drawable/ic_upload_arrow.xml
- presentation/src/main/res/drawable/ic_upload_arrow_fill.xml
- presentation/src/main/res/drawable/ic_upload_arrow_animated.xml
♻️ Duplicate comments (1)
presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt (1)
693-701: ⚠️ Potential issue | 🟡 Minor
`thatIsReplacing(true)` overrides the user's original replace choice on file-upload resume.
`handleFolderResume` correctly reads `checkpoint.isReplacing` (line 673), but `handleFileResume` hardcodes `true`. A user who originally chose not to replace will have skipped files forcibly replaced on resume, since skipped files are absent from `completedFiles`.
💡 Proposed fix
 UploadFile.anUploadFile()
 	.withFileName(fileName)
 	.withDataSource(UriBasedDataSource.from(uri))
-	.thatIsReplacing(true)
+	.thatIsReplacing(checkpoint.isReplacing)
 	.build()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt` around lines 693 - 701, In handleFileResume replace the hardcoded thatIsReplacing(true) with the original user's choice by using the checkpoint flag (checkpoint.isReplacing) so UploadFile.anUploadFile()... .thatIsReplacing(...) reflects checkpoint.isReplacing instead of always true; locate the uploadFiles construction in handleFileResume and change the thatIsReplacing argument to checkpoint.isReplacing to preserve the user's replace/skip preference (consistent with handleFolderResume which reads checkpoint.isReplacing).
🧹 Nitpick comments (2)
presentation/src/main/res/values/strings.xml (1)
374-376: `dialog_replace_folder_positive_button` duplicates an existing "Replace" string — minor i18n deduplication opportunity.
💡 Optional refactor
- <string name="dialog_replace_folder_positive_button">Replace</string>
+ <string name="dialog_replace_folder_positive_button" translatable="false">@string/dialog_existing_file_positive_button</string>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/res/values/strings.xml` around lines 374 - 376, The string resource dialog_replace_folder_positive_button duplicates an existing "Replace" label; instead of introducing a new literal, reference or reuse the existing "Replace" string resource (e.g., a resource named "replace" or similar) to avoid i18n duplication—update dialog_replace_folder_positive_button to point to the existing string resource ID (or remove it and update callers to use the shared resource) so translators only see one entry for "Replace".presentation/src/test/java/org/cryptomator/presentation/presenter/VaultListPresenterTest.java (1)
270-290: Consider adding unit tests for the new resume flow methods.
`checkForInterruptedUpload`, `onResumeUploadConfirmed`, and `onResumeUploadDeclined` are untested. Key scenarios worth covering:
- `checkForInterruptedUpload` with a checkpoint → dialog shown + `pendingNavigationAfterResume` set
- `checkForInterruptedUpload` with no checkpoint → direct navigation
- `onResumeUploadDeclined` → checkpoint deleted + navigation proceeds
- `onResumeUploadConfirmed` with null checkpoint → navigation proceeds without crash
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/test/java/org/cryptomator/presentation/presenter/VaultListPresenterTest.java` around lines 270 - 290, Add unit tests in VaultListPresenterTest to cover the resume flow: write a test for checkForInterruptedUpload when uploadCheckpointManager.findCheckpointForVault(...) returns a checkpoint that verifies vaultListView.showResumeUploadDialog(...) is called and that presenter.pendingNavigationAfterResume is set to the expected NavigationAction; write a test for checkForInterruptedUpload when findCheckpointForVault returns null that verifies the presenter immediately calls the navigation method (e.g., navigateToVault or equivalent) and does not show a dialog; write a test for onResumeUploadDeclined that stubs a checkpoint, invokes onResumeUploadDeclined, verifies uploadCheckpointManager.deleteCheckpoint(...) is called with that checkpoint and that navigation proceeds; and write a test for onResumeUploadConfirmed that passes a null checkpoint and verifies navigation proceeds without throwing. Reference methods: checkForInterruptedUpload, onResumeUploadConfirmed, onResumeUploadDeclined, and collaborators uploadCheckpointManager, vaultListView, and the presenter's pendingNavigationAfterResume field to locate code to stub and verify.
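The project's presenter tests use Mockito, so these scenarios would normally stub `uploadCheckpointManager` and verify calls on `vaultListView`. As a dependency-free sketch of the same four behaviors, they can be expressed against a hand-rolled stand-in; every name below (`FakeResumePresenter`, its fields and methods) is illustrative, not the presenter's real API.

```java
// Dependency-free sketch of the four resume-flow scenarios: checkpoint
// present -> dialog shown + navigation deferred; no checkpoint -> navigate
// immediately; declined -> delete checkpoint and navigate; confirmed with a
// null checkpoint -> navigate without throwing.
public class Main {

	static class FakeResumePresenter {

		String checkpoint;          // stand-in for the stored upload checkpoint
		String pendingNavigation;   // stand-in for pendingNavigationAfterResume
		boolean dialogShown;
		boolean navigated;
		boolean checkpointDeleted;

		void checkForInterruptedUpload(String action) {
			if (checkpoint != null) {
				dialogShown = true;
				pendingNavigation = action; // defer navigation until the user decides
			} else {
				navigated = true;
			}
		}

		void onResumeUploadDeclined() {
			if (checkpoint != null) {
				checkpointDeleted = true;
				checkpoint = null;
			}
			navigated = true;
		}

		void onResumeUploadConfirmed(String resumedCheckpoint) {
			// must tolerate a null checkpoint and still proceed
			navigated = true;
		}
	}

	public static void main(String[] args) {
		FakeResumePresenter withCheckpoint = new FakeResumePresenter();
		withCheckpoint.checkpoint = "vault-1";
		withCheckpoint.checkForInterruptedUpload("openVault");
		System.out.println("dialog shown: " + withCheckpoint.dialogShown
				+ ", pending: " + withCheckpoint.pendingNavigation);

		FakeResumePresenter withoutCheckpoint = new FakeResumePresenter();
		withoutCheckpoint.checkForInterruptedUpload("openVault");
		System.out.println("direct navigation: " + withoutCheckpoint.navigated);

		withCheckpoint.onResumeUploadDeclined();
		System.out.println("declined -> deleted: " + withCheckpoint.checkpointDeleted
				+ ", navigated: " + withCheckpoint.navigated);

		FakeResumePresenter nullConfirm = new FakeResumePresenter();
		nullConfirm.onResumeUploadConfirmed(null);
		System.out.println("null checkpoint handled: " + nullConfirm.navigated);
	}
}
```

The point of the sketch is the contract, not the plumbing: each scenario pins down one observable outcome (dialog, navigation, deletion, or no-crash), which translates directly into one Mockito `verify` per test.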
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 693-701: In handleFileResume replace the hardcoded
thatIsReplacing(true) with the original user's choice by using the checkpoint
flag (checkpoint.isReplacing) so UploadFile.anUploadFile()...
.thatIsReplacing(...) reflects checkpoint.isReplacing instead of always true;
locate the uploadFiles construction in handleFileResume and change the
thatIsReplacing argument to checkpoint.isReplacing to preserve the user's
replace/skip preference (consistent with handleFolderResume which reads
checkpoint.isReplacing).
---
Nitpick comments:
In `@presentation/src/main/res/values/strings.xml`:
- Around line 374-376: The string resource dialog_replace_folder_positive_button
duplicates an existing "Replace" label; instead of introducing a new literal,
reference or reuse the existing "Replace" string resource (e.g., a resource
named "replace" or similar) to avoid i18n duplication—update
dialog_replace_folder_positive_button to point to the existing string resource
ID (or remove it and update callers to use the shared resource) so translators
only see one entry for "Replace".
In
`@presentation/src/test/java/org/cryptomator/presentation/presenter/VaultListPresenterTest.java`:
- Around line 270-290: Add unit tests in VaultListPresenterTest to cover the
resume flow: write a test for checkForInterruptedUpload when
uploadCheckpointManager.findCheckpointForVault(...) returns a checkpoint that
verifies vaultListView.showResumeUploadDialog(...) is called and that
presenter.pendingNavigationAfterResume is set to the expected NavigationAction;
write a test for checkForInterruptedUpload when findCheckpointForVault returns
null that verifies the presenter immediately calls the navigation method (e.g.,
navigateToVault or equivalent) and does not show a dialog; write a test for
onResumeUploadDeclined that stubs a checkpoint, invokes onResumeUploadDeclined,
verifies uploadCheckpointManager.deleteCheckpoint(...) is called with that
checkpoint and that navigation proceeds; and write a test for
onResumeUploadConfirmed that passes a null checkpoint and verifies navigation
proceeds without throwing. Reference methods: checkForInterruptedUpload,
onResumeUploadConfirmed, onResumeUploadDeclined, and collaborators
uploadCheckpointManager, vaultListView, and the presenter's
pendingNavigationAfterResume field to locate code to stub and verify.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt
- presentation/src/main/res/values/strings.xml
- presentation/src/test/java/org/cryptomator/presentation/presenter/VaultListPresenterTest.java
Actionable comments posted: 1
♻️ Duplicate comments (2)
presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt (2)
738-746: Suppress the detekt `SwallowedException` warning to document the intentional decision.
The author previously declined adding a log here because checkpoint JSON content could leak plaintext file paths to logcat in a security-sensitive app — a valid security call. Consider annotating the catch to silence the static analysis tool and make the intent explicit.
♻️ Proposed annotation
+@Suppress("SwallowedException")
 } catch (e: JSONException) {
 	emptyList()
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt` around lines 738 - 746, The catch block in parseJsonArray swallows JSONException without annotation; add the Detekt suppression to document the intentional decision to avoid logging sensitive JSON (suppress "SwallowedException" on the catch or the function) so static analysis is satisfied. Update the parseJsonArray function declaration or the catch clause to include the appropriate `@Suppress`("SwallowedException") annotation and keep the current behavior of returning emptyList() on error, referencing the parseJsonArray method and its JSONException catch.
698-708: ⚠️ Potential issue | 🟡 Minor
`thatIsReplacing(true)` should use `checkpoint.isReplacing` to match the folder-resume behavior.
`handleFolderResume` correctly reads `checkpoint.isReplacing` (line 684) to preserve the user's original replace/skip decision. `handleFileResume` unconditionally overrides with `true`, meaning pending files from an originally non-replacing upload will silently overwrite existing vault files on resume.
🔧 Proposed fix
- .thatIsReplacing(true)
+ .thatIsReplacing(checkpoint.isReplacing)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt` around lines 698 - 708, In handleFileResume, the UploadFile builder unconditionally sets .thatIsReplacing(true) which overrides the original user decision; change it to use the checkpoint flag by setting .thatIsReplacing(checkpoint.isReplacing) on the UploadFile built via UploadFile.anUploadFile() so file-resume preserves the original replace/skip behavior (same approach as handleFolderResume which reads checkpoint.isReplacing).
🧹 Nitpick comments (1)
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java (1)
303-308: Allocating `new Handler(Looper.getMainLooper())` on every file-upload callback is wasteful.
For uploads with large file counts, this creates one `Handler` object per file. Promote it to a service-level field.
♻️ Proposed refactor
Add field:
+private final Handler mainHandler = new Handler(Looper.getMainLooper());
Replace per-invocation allocation:
- new Handler(Looper.getMainLooper()).post(() -> {
+ mainHandler.post(() -> {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 303 - 308, The code creates a new Handler via new Handler(Looper.getMainLooper()) inside the per-file upload callback, which wastes objects for large uploads; add a single service-level Handler field (e.g., private final Handler mainHandler) initialized once (using Looper.getMainLooper()) in the UploadService constructor or field declaration, and replace the per-invocation new Handler(Looper.getMainLooper()).post(...) with mainHandler.post(...) where notification.updateFinishedFile() is called so the same Handler is reused across callbacks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 183-185: The early-return when the global flag cancelled is true
inside processUpload skips calling emitVaultFinished, leaving the vault marked
active; update the cancelled check path in processUpload (the block that
currently returns false after "if (cancelled) { return false; }") to call
emitVaultFinished(vault) before returning so it mirrors the
CancellationException handler; ensure you reference the same vault identifier
used by emitVaultActive in startUpload so UploadUiUpdates is correctly cleared
on global cancel.
---
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt`:
- Around line 738-746: The catch block in parseJsonArray swallows JSONException
without annotation; add the Detekt suppression to document the intentional
decision to avoid logging sensitive JSON (suppress "SwallowedException" on the
catch or the function) so static analysis is satisfied. Update the
parseJsonArray function declaration or the catch clause to include the
appropriate `@Suppress`("SwallowedException") annotation and keep the current
behavior of returning emptyList() on error, referencing the parseJsonArray
method and its JSONException catch.
- Around line 698-708: In handleFileResume, the UploadFile builder
unconditionally sets .thatIsReplacing(true) which overrides the original user
decision; change it to use the checkpoint flag by setting
.thatIsReplacing(checkpoint.isReplacing) on the UploadFile built via
UploadFile.anUploadFile() so file-resume preserves the original replace/skip
behavior (same approach as handleFolderResume which reads
checkpoint.isReplacing).
---
Nitpick comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 303-308: The code creates a new Handler via new
Handler(Looper.getMainLooper()) inside the per-file upload callback, which
wastes objects for large uploads; add a single service-level Handler field
(e.g., private final Handler mainHandler) initialized once (using
Looper.getMainLooper()) in the UploadService constructor or field declaration,
and replace the per-invocation new Handler(Looper.getMainLooper()).post(...)
with mainHandler.post(...) where notification.updateFinishedFile() is called so
the same Handler is reused across callbacks.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- presentation/src/main/java/org/cryptomator/presentation/presenter/VaultListPresenter.kt
- presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java
- presentation/src/main/java/org/cryptomator/presentation/ui/activity/VaultListActivity.kt
- presentation/src/main/java/org/cryptomator/presentation/ui/dialog/CancelUploadAndLockDialog.kt
- presentation/src/main/res/layout/dialog_cancel_upload_and_lock.xml
- presentation/src/main/res/values/strings.xml
- presentation/src/test/java/org/cryptomator/presentation/presenter/VaultListPresenterTest.java
Actionable comments posted: 1
♻️ Duplicate comments (2)
presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java (2)
182-185: ⚠️ Potential issue | 🟡 Minor
Emit vault-finished on the early cancelled path.
Line 183 returns without calling `emitVaultFinished`, so a global cancel can leave the vault marked active.
🧩 Suggested fix
 if (cancelled) {
+	emitVaultFinished(upload.vaultId);
 	return false;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 182 - 185, The early-return on the cancelled path leaves the vault active because emitVaultFinished is not called; update the cancelled branch in UploadService (the try block that checks the cancelled boolean) to call emitVaultFinished(...) with the appropriate vault/session parameters before returning false (or move vault-finalization into a finally block that always calls emitVaultFinished), ensuring emitVaultFinished is invoked on both normal and early-cancel exit paths.
377-387: ⚠️ Potential issue | 🟡 Minor
Guard `deleteTempFiles` against non-file URIs.
Line 380 assumes `file://` URIs (`uri.getPath()`), which can silently fail or NPE for `content://`. Add a scheme check (or document the constraint).
🛡️ Suggested guard
 private void deleteTempFiles(List<Uri> uris) {
 	for (Uri uri : uris) {
 		try {
+			if (!"file".equals(uri.getScheme())) {
+				Timber.tag("UploadService").w("Skipping non-file cleanup URI: %s", uri);
+				continue;
+			}
 			File file = new File(uri.getPath());
 			if (file.delete()) {
 				Timber.tag("UploadService").d("Cleaned up temp file: %s", uri);
 			}
 		} catch (Exception e) {
#!/bin/bash
# Verify all cleanupUris sources are file:// and identify any content:// usage.
rg -n "cleanupUris|startFileUpload|setCleanupUri|Uri\.fromFile|deleteTempFiles" --type=kt --type=java -C 3
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java` around lines 377 - 387, The deleteTempFiles(List<Uri> uris) method assumes every Uri is a file:// Uri and calls new File(uri.getPath()), which can fail for content:// URIs; update deleteTempFiles to first check uri.getScheme() and only attempt File deletion when scheme equals "file" (or uri.getPath() is non-null and scheme is null for legacy file URIs); otherwise skip and log a warning or resolve and delete via ContentResolver if you want to support content://; also update any callers (e.g., places that populate cleanupUris, startFileUpload, setCleanupUri) to ensure they supply file:// Uris or document the constraint.
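The scheme check itself is easy to exercise off-device. Android's `android.net.Uri` exposes the same `getScheme()`/`getPath()` accessors, but this sketch deliberately uses the JDK's `java.net.URI` so it runs standalone; the helper name `isDeletableFileUri` and the example paths are illustrative.

```java
// Sketch of the suggested guard: only file:// URIs are safe to turn into
// java.io.File paths. content:// URIs must be skipped here (or routed
// through a ContentResolver instead of the filesystem).
import java.net.URI;

public class Main {

	static boolean isDeletableFileUri(URI uri) {
		return "file".equals(uri.getScheme()) && uri.getPath() != null;
	}

	public static void main(String[] args) {
		System.out.println(isDeletableFileUri(URI.create("file:///data/tmp/upload.tmp")));
		System.out.println(isDeletableFileUri(URI.create("content://media/external/images/1")));
	}
}
```

For a `content://` URI, `getPath()` still returns a non-null string, which is exactly why a path-null check alone is insufficient and the scheme must be compared explicitly.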
🧹 Nitpick comments (1)
presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt (1)
1202-1211: Consider guarding against stream termination on error.

If the upstream `eventsFor` observable terminates with an error, the subscription is cancelled and `view?.addOrUpdateCloudNode` will stop being called for the remainder of the session (until the next `getCloudList` call re-subscribes). For an event-bus-style observable this is unlikely in practice, but `onErrorResumeNext` would make it resilient.

♻️ Proposed defensive change

```diff
 uploadEventsDisposable = uploadUiUpdates.eventsFor(folderKey)
 	.observeOn(AndroidSchedulers.mainThread())
+	.onErrorResumeNext { e: Throwable ->
+		Timber.tag("BrowseFilesPresenter").e(e, "Upload event stream error, stopping updates for folder")
+		io.reactivex.Observable.empty()
+	}
 	.subscribe({ event ->
 		when (event) {
 			is UploadUiEvent.NodeCreated -> view?.addOrUpdateCloudNode(event.node)
 			is UploadUiEvent.UploadFinished -> uploadUiUpdates.clear(event.folderKey)
 		}
-	}, { e ->
-		Timber.tag("BrowseFilesPresenter").e(e, "Upload event stream error")
-	})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt` around lines 1202 - 1211, The subscription to uploadUiUpdates.eventsFor(folderKey) can terminate on error and stop further UI updates; modify the chain on uploadEventsDisposable so that uploadUiUpdates.eventsFor(folderKey) is followed by onErrorResumeNext(Observable.empty()) (or equivalent Rx method) before observeOn/subscribe, and log the error inside the onErrorResumeNext handler (using Timber.tag("BrowseFilesPresenter").e(...)) so the stream is resilient and view?.addOrUpdateCloudNode continues to receive events even after an upstream error.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 1056-1061: The catch block for catch (e: SecurityException) around
context().contentResolver.takePersistableUriPermission(uri,
Intent.FLAG_GRANT_READ_URI_PERMISSION) is swallowing the exception; update the
Timber logging call (currently Timber.tag("BrowseFilesPresenter").d("Could not
persist URI permission: %s", uri)) to include the caught exception e (e.g. pass
e as the throwable or switch to Timber.e) so the stack trace is logged alongside
the URI and message.
---
Duplicate comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java`:
- Around line 182-185: The early-return on the cancelled path leaves the vault
active because emitVaultFinished is not called; update the cancelled branch in
UploadService (the try block that checks the cancelled boolean) to call
emitVaultFinished(...) with the appropriate vault/session parameters before
returning false (or move vault-finalization into a finally block that always
calls emitVaultFinished), ensuring emitVaultFinished is invoked on both normal
and early-cancel exit paths.
- Around line 377-387: The deleteTempFiles(List<Uri> uris) method assumes every
Uri is a file:// Uri and calls new File(uri.getPath()), which can fail for
content:// URIs; update deleteTempFiles to first check uri.getScheme() and only
attempt File deletion when scheme equals "file" (or uri.getPath() is non-null
and scheme is null for legacy file URIs); otherwise skip and log a warning or
resolve and delete via ContentResolver if you want to support content://; also
update any callers (e.g., places that populate cleanupUris, startFileUpload,
setCleanupUri) to ensure they supply file:// Uris or document the constraint.
---
Nitpick comments:
In
`@presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt`:
- Around line 1202-1211: The subscription to
uploadUiUpdates.eventsFor(folderKey) can terminate on error and stop further UI
updates; modify the chain on uploadEventsDisposable so that
uploadUiUpdates.eventsFor(folderKey) is followed by
onErrorResumeNext(Observable.empty()) (or equivalent Rx method) before
observeOn/subscribe, and log the error inside the onErrorResumeNext handler
(using Timber.tag("BrowseFilesPresenter").e(...)) so the stream is resilient and
view?.addOrUpdateCloudNode continues to receive events even after an upstream
error.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- buildsystem/dependencies.gradle
- presentation/src/main/java/org/cryptomator/presentation/presenter/BrowseFilesPresenter.kt
- presentation/src/main/java/org/cryptomator/presentation/service/UploadService.java
Adds folder upload support — users pick a local folder, the app traverses the directory tree and uploads everything into the vault with the folder structure preserved.
Upload flow
```mermaid
sequenceDiagram
    participant U as User
    participant P as BrowseFilesPresenter
    participant App as CryptomatorApp
    participant S as UploadService
    participant Bus as UploadUiUpdates
    participant VP as VaultListPresenter
    U->>P: Pick folder (system picker)
    P->>P: Build UploadFolderStructure
    P->>P: Save checkpoint to DB
    P->>App: startFolderUpload()
    App->>S: Binder.startFolderUpload()
    S->>S: Acquire UPLOAD lease (5 min TTL)
    loop Each file in folder tree
        S->>S: Upload file to vault
        S->>S: Renew lease on progress
        S->>S: Update checkpoint in DB
        S->>Bus: emit(NodeCreated)
    end
    Bus-->>P: NodeCreated → addOrUpdateCloudNode()
    Bus-->>VP: VaultUploadEvent → show indicator
    S->>S: Delete checkpoint
    S->>Bus: emit(UploadFinished)
    S->>S: Release lease
```

Upload lifecycle
Folder upload
Users select a folder via the system document picker. The directory tree is scanned in the background, vault folders are created recursively, then all files are uploaded. Both file and folder uploads run in a foreground service so they survive app backgrounding and vault locks. Progress is checkpointed per file; after vault unlock the app detects interrupted uploads and offers to resume. When a folder name already exists at the target, a replace dialog lets the user choose to overwrite, skip, or cancel — the choice is persisted so resumed uploads honour it. Also extracts duplicated stream-copy utilities into a shared StreamHelper.
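The per-file checkpointing described above can be modeled as a record of the first file not yet fully uploaded; on resume, everything before that index is skipped. A minimal sketch of the idea, not the app's actual `UploadCheckpointEntity` schema (names are illustrative):

```java
import java.util.List;

public class UploadCheckpoint {

	final String folderKey;
	int nextFileIndex; // index of the first file not yet fully uploaded

	UploadCheckpoint(String folderKey) {
		this.folderKey = folderKey;
		this.nextFileIndex = 0;
	}

	// Called after each file completes; in the app this would be persisted to the DB
	// so the checkpoint survives process death and vault locks.
	void markUploaded() {
		nextFileIndex++;
	}

	// On resume, only the files after the checkpoint still need uploading.
	List<String> remaining(List<String> files) {
		return files.subList(Math.min(nextFileIndex, files.size()), files.size());
	}

	public static void main(String[] args) {
		UploadCheckpoint cp = new UploadCheckpoint("vault1:/docs");
		List<String> files = List.of("a.txt", "b.txt", "c.txt");
		cp.markUploaded(); // a.txt done, then the upload is interrupted
		System.out.println(cp.remaining(files)); // prints [b.txt, c.txt]
	}
}
```

Keeping the checkpoint as a single monotonically advancing index (rather than a per-file flag set) means one small DB update per completed file, which matches the "update checkpoint in DB" step in the flow above.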
Keep-alive leases
Replaces the boolean `suspendLock`/`unSuspendLock` mechanism with time-bounded keep-alive leases. Picker flows get a 2 min lease, the upload service renews on progress (5 min TTL, 12 h hard max), editing gets 4 h. Leaked or stuck leases auto-expire, so no code path can permanently block vault auto-lock.
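The lease semantics (a renewable soft TTL capped by a hard maximum) can be sketched as follows. This is an illustrative model under the TTL values stated above, not the app's actual `KeepAliveLeaseManager` API:

```java
public class KeepAliveLease {

	private final long ttlMillis;       // soft TTL, restored on each renewal
	private final long hardMaxAtMillis; // absolute cap, can never be extended
	private long expiresAtMillis;       // soft expiry, pushed forward on progress

	KeepAliveLease(long nowMillis, long ttlMillis, long hardMaxMillis) {
		this.ttlMillis = ttlMillis;
		this.expiresAtMillis = nowMillis + ttlMillis;
		this.hardMaxAtMillis = nowMillis + hardMaxMillis;
	}

	// Renewing pushes the soft expiry forward but never past the hard max,
	// so even a leaked lease blocks auto-lock for at most hardMaxMillis.
	void renew(long nowMillis) {
		expiresAtMillis = Math.min(nowMillis + ttlMillis, hardMaxAtMillis);
	}

	boolean isActive(long nowMillis) {
		return nowMillis < expiresAtMillis;
	}

	public static void main(String[] args) {
		// 5 min TTL, 12 h hard max, as in the upload service description.
		KeepAliveLease lease = new KeepAliveLease(0, 5 * 60_000L, 12 * 3_600_000L);
		System.out.println(lease.isActive(4 * 60_000L));  // prints true: within TTL
		lease.renew(4 * 60_000L);                          // progress at minute 4
		System.out.println(lease.isActive(8 * 60_000L));  // prints true: renewed to minute 9
		System.out.println(lease.isActive(10 * 60_000L)); // prints false: TTL elapsed again
	}
}
```

Passing the clock in explicitly keeps the expiry logic deterministic and testable; the real manager would presumably read a monotonic clock internally.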
Uploaded files and folders appear in the vault browser in real time via an `UploadUiUpdates` event bus — no manual refresh needed. The vault list shows an animated upload indicator per vault, plus a resumable badge for interrupted uploads. Locking a vault with an active upload shows a Cancel & Lock confirmation dialog; cancellation is scoped per vault so locking vault A does not abort uploads for vault B.
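Per-vault cancellation scoping can be modeled as cancellation flags keyed by vault id, so cancelling one vault's uploads leaves the others untouched. A minimal sketch of the principle (names are illustrative, not the service's actual implementation):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

public class UploadCancellation {

	private final ConcurrentHashMap<String, AtomicBoolean> cancelled = new ConcurrentHashMap<>();

	// Upload workers poll this between files; only their own vault's flag matters,
	// so cancelling vault A never interrupts vault B's upload loop.
	boolean isCancelled(String vaultId) {
		AtomicBoolean flag = cancelled.get(vaultId);
		return flag != null && flag.get();
	}

	// Called from the Cancel & Lock dialog for exactly one vault.
	void cancel(String vaultId) {
		cancelled.computeIfAbsent(vaultId, k -> new AtomicBoolean()).set(true);
	}

	public static void main(String[] args) {
		UploadCancellation c = new UploadCancellation();
		c.cancel("vaultA");
		System.out.println(c.isCancelled("vaultA")); // prints true
		System.out.println(c.isCancelled("vaultB")); // prints false: vault B unaffected
	}
}
```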