Compression


Enable your app to provide lossless compression when saving or sharing files and data using Compression.

Posts under Compression tag

5 Posts

Post · Replies · Boosts · Views · Activity

App Rejected: Non-Public Symbols _lzma_code and _lzma_end in Payload/Hogs.app/Hogs
Hello, I have integrated LZMA2 compression into my iOS app, Hogs, and successfully implemented compression. However, when attempting to upload the app for TestFlight, I encountered an error: "The app references non-public symbols in Payload/Hogs.app/Hogs: _lzma_code, _lzma_end." These functions are part of the LZMA compression library (specifically LZMA2). Here's a detailed description of the issue:

What I Have Done:
- LZMA2 Integration: I integrated LZMA2 compression into the app and created a wrapper around the LZMA functions (_lzma_code, _lzma_end) to prevent direct references.
- App Build Configuration: I ensured the LZMA2 library is linked correctly with the -lzma flag in the linker settings. I wrapped the LZMA functions in custom functions (my_lzma_code, my_lzma_end) in an attempt to avoid using the non-public symbols directly.

Error Message: During the app submission process, I received the following error: "The app references non-public symbols in Payload/Hogs.app/Hogs: _lzma_code, _lzma_end."

Steps Taken to Resolve:
- Checked if any LZMA functions were exposed incorrectly.
- Ensured that all non-public symbols were properly encapsulated in a wrapper.
- Verified linker settings to ensure the proper inclusion of the LZMA2 library.

Request: Could anyone provide suggestions or best practices to resolve this issue and avoid references to non-public symbols? Should I use a different method for linking LZMA2 or encapsulating these symbols?

Thank You: I appreciate your help in resolving this issue so I can move forward with submitting the app for TestFlight.
Replies: 1 · Boosts: 0 · Views: 80 · Activity: 4d
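One possible direction for the post above, sketched here rather than offered as a definitive fix: if plain LZMA (rather than LZMA2 specifically) is acceptable, Foundation already exposes the system Compression support through NSData, so the app would not need to link liblzma at all and the _lzma_* symbols would never appear in the binary. The function names below are illustrative.

    import Foundation

    // Minimal sketch: LZMA via the system compression support in Foundation
    // (iOS 13+ / macOS 10.15+), avoiding any direct dependency on liblzma.
    func lzmaCompress(_ data: Data) throws -> Data {
        try (data as NSData).compressed(using: .lzma) as Data
    }

    func lzmaDecompress(_ data: Data) throws -> Data {
        try (data as NSData).decompressed(using: .lzma) as Data
    }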
Compression LZMA2 (SWCompression)
Hi everyone, I am trying to compress my data using LZMA2 with the help of CocoaPods. Here are the steps I followed to achieve LZMA2 compression:

- Added the pod 'SWCompression', '~> 4.8' dependency to my Podfile and installed the pod using the terminal.

When I try to compress the data using LZMA2, I am unable to do so because in SWCompression, LZMA2 compression is marked as TBD (To Be Determined). Here is the current status of SWCompression:

                   Deflate   BZip2   LZMA/LZMA2   LZ4
    Decompression  ✅        ✅      ✅           ✅
    Compression    ✅        ✅      TBD          ✅

                   Zlib   GZip   XZ    ZIP   TAR   7-Zip
    Read           ✅     ✅     ✅    ✅    ✅    ✅
    Write          ✅     ✅     TBD   TBD   ✅    TBD

Since LZMA2 compression is still marked as TBD, is there any other way to achieve LZMA2 compression for my data in SwiftUI?
Replies: 2 · Boosts: 0 · Views: 190 · Activity: Nov ’24
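A hedged sketch related to the post above: SWCompression does not yet write LZMA2, but the system Compression framework can encode LZMA; as I understand it the output is xz-framed (and xz streams use the LZMA2 filter internally), though whether that satisfies a strict LZMA2 requirement depends on what the consumer expects. The helper name and the output-size headroom below are assumptions for illustration.

    import Compression
    import Foundation

    // Sketch: one-shot LZMA encoding with the system Compression framework.
    // The +1024 output headroom is a rough heuristic, not a guaranteed bound.
    func lzmaEncode(_ input: Data) -> Data? {
        guard !input.isEmpty else { return nil }
        let destCapacity = input.count + 1024
        let dest = UnsafeMutablePointer<UInt8>.allocate(capacity: destCapacity)
        defer { dest.deallocate() }

        let written = input.withUnsafeBytes { (src: UnsafeRawBufferPointer) -> Int in
            compression_encode_buffer(dest, destCapacity,
                                      src.bindMemory(to: UInt8.self).baseAddress!, input.count,
                                      nil,            // scratch buffer is optional here
                                      COMPRESSION_LZMA)
        }
        return written > 0 ? Data(bytes: dest, count: written) : nil
    }

For large payloads, the streaming API (compression_stream_init / compression_stream_process) would avoid a single large destination buffer, but the one-shot form keeps the sketch short.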
Can't get compression_decode_buffer() to work
I've got a Data of deflate (zlib)-compressed data that decompresses properly using NSData.decompressed(), but does not decompress properly using compression_decode_buffer().

The working code looks like this:

    let compressedData = Data(bytesNoCopy: buf.baseAddress!, count: readCount, deallocator: .none)
    let dataWithoutHeader = compressedData[2...]
    let ucData = try (dataWithoutHeader as NSData).decompressed(using: .zlib) as Data

The non-working code looks like this:

    let samples = try [Float](unsafeUninitializedCapacity: sampleCount) { buffer, initializedCount in
        print("Count: \(initializedCount)")
        try compressedData.withUnsafeBytes<UInt8> { (inCompressedBytes: UnsafeRawBufferPointer) -> Void in
            let destBufferSize = buffer.count * MemoryLayout<Float>.size
            let scratchBuffer = UnsafeMutableRawBufferPointer.allocate(
                byteCount: compression_decode_scratch_buffer_size(COMPRESSION_ZLIB),
                alignment: MemoryLayout<Int>.alignment)
            defer { scratchBuffer.deallocate() }
            let decompressedSize = compression_decode_buffer(
                buffer.baseAddress!, destBufferSize,
                inCompressedBytes.baseAddress!, inCompressedBytes.count,
                scratchBuffer.baseAddress!, COMPRESSION_ZLIB)
            print("Actual decompressed size: \(decompressedSize), destBufferSize: \(destBufferSize)")
        }
        initializedCount = sampleCount
    }

It ends up printing:

    Actual decompressed size: 46510, destBufferSize: 1048576

(1048576 is the correct size. What data is returned does not appear to be correct.) I have tried it both with and without the first two bytes of the compressed data buffer, and with and without providing a scratch buffer.
Replies: 2 · Boosts: 0 · Views: 377 · Activity: Aug ’24
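For comparison with the post above, a minimal sketch of a plain one-shot decode: compression_decode_buffer with COMPRESSION_ZLIB expects raw DEFLATE data (RFC 1951), so the two zlib header bytes are skipped here, the destination is a plain byte buffer, and the scratch buffer is omitted (it is optional). The expected-size parameter and the header-stripping are assumptions about the input, not a diagnosis of the poster's issue.

    import Compression
    import Foundation

    // Sketch: one-shot raw-DEFLATE decode into a byte buffer.
    // COMPRESSION_ZLIB is raw DEFLATE (RFC 1951), so the 2-byte zlib header
    // (e.g. 0x78 0x5E) is dropped first -- an assumption about the input.
    func inflate(_ zlibData: Data, expectedSize: Int) -> Data? {
        let raw = zlibData.dropFirst(2)
        let dest = UnsafeMutablePointer<UInt8>.allocate(capacity: expectedSize)
        defer { dest.deallocate() }

        let written = raw.withUnsafeBytes { (src: UnsafeRawBufferPointer) -> Int in
            compression_decode_buffer(dest, expectedSize,
                                      src.bindMemory(to: UInt8.self).baseAddress!, raw.count,
                                      nil,              // scratch buffer is optional
                                      COMPRESSION_ZLIB)
        }
        return written == expectedSize ? Data(bytes: dest, count: written) : nil
    }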
How to make AppleArchive + ZLIB compatible with non-Apple systems?
I very much love the performance of AppleArchive and how approachable it is, and believe it to be one of the most underrated frameworks in the SDK.

In a scenario quite typical, I need to compress files and submit them to a back end, where the server handling the files is not an Apple platform. Obviously, individual files compressed with AA will not be compatible with other systems out of the box, but there are compatible compression algorithms. ZLIB is recommended for cases where cross-platform compatibility is necessary. As I understand it, AA adds additional headers to files in order to support preservation of file attributes, ownership and other data.

Following the steps outlined in the docs, I've written code to compress single files. I can easily compress and decompress using AA without issue. To create a proof-of-concept, I've written some code in python using its zlib module. In order to get to the compressed data, it's necessary to handle the AA header fields. The first 64 bytes of a compressed file appear as follows:

AA documentation states that ZLIB Level 5 compression is used, and comes in the form of raw DEFLATE data prefixed with two header bytes. In this case, these bytes are 78 5e, which begin at the 28th byte and appear as x^ above. My hope was that seeking to the start of the compressed data, then passing what remains to a decompressor object initialized with the correct WBITS, would work. It works fantastically for files 1MB or less in size. Files which are larger only decompress the first megabyte. The decompressor object is reaching EOF, and I've tried various ways of attempting to seek to and concatenate the other blocks, but to no avail.

Using the older Compression framework and the method specified here, with the same algorithm, yields different results. I can decompress files of any size using python's zlib module. My assumption is that AppleArchive is doing something differently in order to support its multithreading capabilities, perhaps even with asymmetric encoding where the blocks are not ordered.

Is there a solution to this problem? If not, why would one ever use ZLIB versus the much more efficient LZFSE? I could use the older Compression API, but it is significantly slower compressing synchronously, and performance is critical with the application I am adding this feature to.
Replies: 1 · Boosts: 0 · Views: 814 · Activity: Apr ’24
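For context on the setup the post above describes, here is a minimal sketch of single-file compression with AppleArchive's ZLIB codec, loosely following the single-file steps in Apple's documentation. The function name, paths, permissions, options, and error handling are illustrative, and the output still carries AppleArchive's own framing around the DEFLATE payload rather than being a plain zlib file.

    import AppleArchive
    import System

    // Sketch: compress one file with AppleArchive using the ZLIB algorithm.
    func compressFile(source: FilePath, destination: FilePath) throws {
        guard let readStream = ArchiveByteStream.fileStream(
                path: source, mode: .readOnly, options: [],
                permissions: FilePermissions(rawValue: 0o644)),
              let writeStream = ArchiveByteStream.fileStream(
                path: destination, mode: .writeOnly, options: [.create, .truncate],
                permissions: FilePermissions(rawValue: 0o644)),
              let compressStream = ArchiveByteStream.compressionStream(
                using: .zlib, writingTo: writeStream) else {
            fatalError("unable to open streams")   // illustrative error handling
        }
        defer {
            try? compressStream.close()
            try? writeStream.close()
            try? readStream.close()
        }
        // Pump the source file through the compression stream.
        _ = try ArchiveByteStream.process(readingFrom: readStream, writingTo: compressStream)
    }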