zarr.codecs =========== .. py:module:: zarr.codecs Classes ------- .. autoapisummary:: zarr.codecs.BloscCname zarr.codecs.BloscCodec zarr.codecs.BloscShuffle zarr.codecs.BytesCodec zarr.codecs.Crc32cCodec zarr.codecs.Endian zarr.codecs.GzipCodec zarr.codecs.ShardingCodec zarr.codecs.ShardingCodecIndexLocation zarr.codecs.TransposeCodec zarr.codecs.VLenBytesCodec zarr.codecs.VLenUTF8Codec zarr.codecs.ZstdCodec Package Contents ---------------- .. py:class:: BloscCname(*args, **kwds) Bases: :py:obj:`enum.Enum` Enum for compression library used by blosc. .. !! processed by numpydoc !! .. py:attribute:: blosclz :value: 'blosclz' .. py:attribute:: lz4 :value: 'lz4' .. py:attribute:: lz4hc :value: 'lz4hc' .. py:attribute:: snappy :value: 'snappy' .. py:attribute:: zlib :value: 'zlib' .. py:attribute:: zstd :value: 'zstd' .. py:class:: BloscCodec(*, typesize: int | None = None, cname: BloscCname | str = BloscCname.zstd, clevel: int = 5, shuffle: BloscShuffle | str | None = None, blocksize: int = 0) Bases: :py:obj:`zarr.abc.codec.BytesBytesCodec` Base class for bytes-to-bytes codecs. .. !! processed by numpydoc !! .. py:method:: compute_encoded_size(_input_byte_length: int, _chunk_spec: zarr.core.array_spec.ArraySpec) -> int :abstractmethod: Given an input byte length, this method returns the output byte length. Raises a NotImplementedError for codecs with variable-sized outputs (e.g. compressors). :Parameters: **input_byte_length** : int .. **chunk_spec** : ArraySpec .. :Returns: int .. .. !! processed by numpydoc !! .. py:method:: decode(chunks_and_specs: collections.abc.Iterable[tuple[CodecOutput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecInput | None] :async: Decodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecOutput | None, ArraySpec]] Ordered set of encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecInput | None] .. .. !! processed by numpydoc !! .. py:method:: encode(chunks_and_specs: collections.abc.Iterable[tuple[CodecInput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecOutput | None] :async: Encodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecInput | None, ArraySpec]] Ordered set of to-be-encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecOutput | None] .. .. !! processed by numpydoc !! .. py:method:: evolve_from_array_spec(array_spec: zarr.core.array_spec.ArraySpec) -> Self Fills in codec configuration parameters that can be automatically inferred from the array metadata. :Parameters: **array_spec** : ArraySpec .. :Returns: Self .. .. !! processed by numpydoc !! .. py:method:: from_dict(data: dict[str, zarr.core.common.JSON]) -> Self :classmethod: Create an instance of the model from a dictionary .. !! processed by numpydoc !! .. py:method:: resolve_metadata(chunk_spec: zarr.core.array_spec.ArraySpec) -> zarr.core.array_spec.ArraySpec Computed the spec of the chunk after it has been encoded by the codec. This is important for codecs that change the shape, data type or fill value of a chunk. The spec will then be used for subsequent codecs in the pipeline. :Parameters: **chunk_spec** : ArraySpec .. :Returns: ArraySpec .. .. !! processed by numpydoc !! .. py:method:: to_dict() -> dict[str, zarr.core.common.JSON] Recursively serialize this model to a dictionary. 
This method inspects the fields of self and calls `x.to_dict()` for any fields that are instances of `Metadata`. Sequences of `Metadata` are similarly recursed into, and the output of that recursion is collected in a list. .. !! processed by numpydoc !! .. py:method:: validate(*, shape: zarr.core.common.ChunkCoords, dtype: zarr.core.dtype.wrapper.ZDType[zarr.core.dtype.wrapper.TBaseDType, zarr.core.dtype.wrapper.TBaseScalar], chunk_grid: zarr.core.chunk_grids.ChunkGrid) -> None Validates that the codec configuration is compatible with the array metadata. Raises errors when the codec configuration is not compatible. :Parameters: **shape** : ChunkCoords The array shape **dtype** : np.dtype[Any] The array data type **chunk_grid** : ChunkGrid The array chunk grid .. !! processed by numpydoc !! .. py:attribute:: blocksize :type: int :value: 0 .. py:attribute:: clevel :type: int :value: 5 .. py:attribute:: cname :type: BloscCname .. py:attribute:: is_fixed_size :value: False .. py:attribute:: shuffle :type: BloscShuffle | None .. py:attribute:: typesize :type: int | None .. py:class:: BloscShuffle(*args, **kwds) Bases: :py:obj:`enum.Enum` Enum for shuffle filter used by blosc. .. !! processed by numpydoc !! .. py:method:: from_int(num: int) -> BloscShuffle :classmethod: .. py:attribute:: bitshuffle :value: 'bitshuffle' .. py:attribute:: noshuffle :value: 'noshuffle' .. py:attribute:: shuffle :value: 'shuffle' .. py:class:: BytesCodec(*, endian: Endian | str | None = default_system_endian) Bases: :py:obj:`zarr.abc.codec.ArrayBytesCodec` Base class for array-to-bytes codecs. .. !! processed by numpydoc !! .. py:method:: compute_encoded_size(input_byte_length: int, _chunk_spec: zarr.core.array_spec.ArraySpec) -> int Given an input byte length, this method returns the output byte length. Raises a NotImplementedError for codecs with variable-sized outputs (e.g. compressors). :Parameters: **input_byte_length** : int .. **chunk_spec** : ArraySpec .. :Returns: int .. .. !! processed by numpydoc !! .. py:method:: decode(chunks_and_specs: collections.abc.Iterable[tuple[CodecOutput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecInput | None] :async: Decodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecOutput | None, ArraySpec]] Ordered set of encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecInput | None] .. .. !! processed by numpydoc !! .. py:method:: encode(chunks_and_specs: collections.abc.Iterable[tuple[CodecInput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecOutput | None] :async: Encodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecInput | None, ArraySpec]] Ordered set of to-be-encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecOutput | None] .. .. !! processed by numpydoc !! .. py:method:: evolve_from_array_spec(array_spec: zarr.core.array_spec.ArraySpec) -> Self Fills in codec configuration parameters that can be automatically inferred from the array metadata. :Parameters: **array_spec** : ArraySpec .. :Returns: Self .. .. !! processed by numpydoc !! .. py:method:: from_dict(data: dict[str, zarr.core.common.JSON]) -> Self :classmethod: Create an instance of the model from a dictionary .. !! processed by numpydoc !! .. 
py:method:: resolve_metadata(chunk_spec: zarr.core.array_spec.ArraySpec) -> zarr.core.array_spec.ArraySpec Computed the spec of the chunk after it has been encoded by the codec. This is important for codecs that change the shape, data type or fill value of a chunk. The spec will then be used for subsequent codecs in the pipeline. :Parameters: **chunk_spec** : ArraySpec .. :Returns: ArraySpec .. .. !! processed by numpydoc !! .. py:method:: to_dict() -> dict[str, zarr.core.common.JSON] Recursively serialize this model to a dictionary. This method inspects the fields of self and calls `x.to_dict()` for any fields that are instances of `Metadata`. Sequences of `Metadata` are similarly recursed into, and the output of that recursion is collected in a list. .. !! processed by numpydoc !! .. py:method:: validate(*, shape: zarr.core.common.ChunkCoords, dtype: zarr.core.dtype.wrapper.ZDType[zarr.core.dtype.wrapper.TBaseDType, zarr.core.dtype.wrapper.TBaseScalar], chunk_grid: zarr.core.chunk_grids.ChunkGrid) -> None Validates that the codec configuration is compatible with the array metadata. Raises errors when the codec configuration is not compatible. :Parameters: **shape** : ChunkCoords The array shape **dtype** : np.dtype[Any] The array data type **chunk_grid** : ChunkGrid The array chunk grid .. !! processed by numpydoc !! .. py:attribute:: endian :type: Endian | None .. py:attribute:: is_fixed_size :value: True .. py:class:: Crc32cCodec Bases: :py:obj:`zarr.abc.codec.BytesBytesCodec` Base class for bytes-to-bytes codecs. .. !! processed by numpydoc !! .. py:method:: compute_encoded_size(input_byte_length: int, _chunk_spec: zarr.core.array_spec.ArraySpec) -> int Given an input byte length, this method returns the output byte length. Raises a NotImplementedError for codecs with variable-sized outputs (e.g. compressors). :Parameters: **input_byte_length** : int .. **chunk_spec** : ArraySpec .. :Returns: int .. .. !! processed by numpydoc !! .. py:method:: decode(chunks_and_specs: collections.abc.Iterable[tuple[CodecOutput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecInput | None] :async: Decodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecOutput | None, ArraySpec]] Ordered set of encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecInput | None] .. .. !! processed by numpydoc !! .. py:method:: encode(chunks_and_specs: collections.abc.Iterable[tuple[CodecInput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecOutput | None] :async: Encodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecInput | None, ArraySpec]] Ordered set of to-be-encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecOutput | None] .. .. !! processed by numpydoc !! .. py:method:: evolve_from_array_spec(array_spec: zarr.core.array_spec.ArraySpec) -> Self Fills in codec configuration parameters that can be automatically inferred from the array metadata. :Parameters: **array_spec** : ArraySpec .. :Returns: Self .. .. !! processed by numpydoc !! .. py:method:: from_dict(data: dict[str, zarr.core.common.JSON]) -> Self :classmethod: Create an instance of the model from a dictionary .. !! processed by numpydoc !! .. 
py:method:: resolve_metadata(chunk_spec: zarr.core.array_spec.ArraySpec) -> zarr.core.array_spec.ArraySpec Computed the spec of the chunk after it has been encoded by the codec. This is important for codecs that change the shape, data type or fill value of a chunk. The spec will then be used for subsequent codecs in the pipeline. :Parameters: **chunk_spec** : ArraySpec .. :Returns: ArraySpec .. .. !! processed by numpydoc !! .. py:method:: to_dict() -> dict[str, zarr.core.common.JSON] Recursively serialize this model to a dictionary. This method inspects the fields of self and calls `x.to_dict()` for any fields that are instances of `Metadata`. Sequences of `Metadata` are similarly recursed into, and the output of that recursion is collected in a list. .. !! processed by numpydoc !! .. py:method:: validate(*, shape: zarr.core.common.ChunkCoords, dtype: zarr.core.dtype.wrapper.ZDType[zarr.core.dtype.wrapper.TBaseDType, zarr.core.dtype.wrapper.TBaseScalar], chunk_grid: zarr.core.chunk_grids.ChunkGrid) -> None Validates that the codec configuration is compatible with the array metadata. Raises errors when the codec configuration is not compatible. :Parameters: **shape** : ChunkCoords The array shape **dtype** : np.dtype[Any] The array data type **chunk_grid** : ChunkGrid The array chunk grid .. !! processed by numpydoc !! .. py:attribute:: is_fixed_size :value: True .. py:class:: Endian(*args, **kwds) Bases: :py:obj:`enum.Enum` Enum for endian type used by bytes codec. .. !! processed by numpydoc !! .. py:attribute:: big :value: 'big' .. py:attribute:: little :value: 'little' .. py:class:: GzipCodec(*, level: int = 5) Bases: :py:obj:`zarr.abc.codec.BytesBytesCodec` Base class for bytes-to-bytes codecs. .. !! processed by numpydoc !! .. py:method:: compute_encoded_size(_input_byte_length: int, _chunk_spec: zarr.core.array_spec.ArraySpec) -> int :abstractmethod: Given an input byte length, this method returns the output byte length. Raises a NotImplementedError for codecs with variable-sized outputs (e.g. compressors). :Parameters: **input_byte_length** : int .. **chunk_spec** : ArraySpec .. :Returns: int .. .. !! processed by numpydoc !! .. py:method:: decode(chunks_and_specs: collections.abc.Iterable[tuple[CodecOutput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecInput | None] :async: Decodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecOutput | None, ArraySpec]] Ordered set of encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecInput | None] .. .. !! processed by numpydoc !! .. py:method:: encode(chunks_and_specs: collections.abc.Iterable[tuple[CodecInput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecOutput | None] :async: Encodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecInput | None, ArraySpec]] Ordered set of to-be-encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecOutput | None] .. .. !! processed by numpydoc !! .. py:method:: evolve_from_array_spec(array_spec: zarr.core.array_spec.ArraySpec) -> Self Fills in codec configuration parameters that can be automatically inferred from the array metadata. :Parameters: **array_spec** : ArraySpec .. :Returns: Self .. .. !! processed by numpydoc !! .. 
py:method:: from_dict(data: dict[str, zarr.core.common.JSON]) -> Self :classmethod: Create an instance of the model from a dictionary .. !! processed by numpydoc !! .. py:method:: resolve_metadata(chunk_spec: zarr.core.array_spec.ArraySpec) -> zarr.core.array_spec.ArraySpec Computed the spec of the chunk after it has been encoded by the codec. This is important for codecs that change the shape, data type or fill value of a chunk. The spec will then be used for subsequent codecs in the pipeline. :Parameters: **chunk_spec** : ArraySpec .. :Returns: ArraySpec .. .. !! processed by numpydoc !! .. py:method:: to_dict() -> dict[str, zarr.core.common.JSON] Recursively serialize this model to a dictionary. This method inspects the fields of self and calls `x.to_dict()` for any fields that are instances of `Metadata`. Sequences of `Metadata` are similarly recursed into, and the output of that recursion is collected in a list. .. !! processed by numpydoc !! .. py:method:: validate(*, shape: zarr.core.common.ChunkCoords, dtype: zarr.core.dtype.wrapper.ZDType[zarr.core.dtype.wrapper.TBaseDType, zarr.core.dtype.wrapper.TBaseScalar], chunk_grid: zarr.core.chunk_grids.ChunkGrid) -> None Validates that the codec configuration is compatible with the array metadata. Raises errors when the codec configuration is not compatible. :Parameters: **shape** : ChunkCoords The array shape **dtype** : np.dtype[Any] The array data type **chunk_grid** : ChunkGrid The array chunk grid .. !! processed by numpydoc !! .. py:attribute:: is_fixed_size :value: False .. py:attribute:: level :type: int :value: 5 .. py:class:: ShardingCodec(*, chunk_shape: zarr.core.common.ChunkCoordsLike, codecs: collections.abc.Iterable[zarr.abc.codec.Codec | dict[str, zarr.core.common.JSON]] = (BytesCodec(), ), index_codecs: collections.abc.Iterable[zarr.abc.codec.Codec | dict[str, zarr.core.common.JSON]] = (BytesCodec(), Crc32cCodec()), index_location: ShardingCodecIndexLocation | str = ShardingCodecIndexLocation.end) Bases: :py:obj:`zarr.abc.codec.ArrayBytesCodec`, :py:obj:`zarr.abc.codec.ArrayBytesCodecPartialDecodeMixin`, :py:obj:`zarr.abc.codec.ArrayBytesCodecPartialEncodeMixin` Base class for array-to-bytes codecs. .. !! processed by numpydoc !! .. py:method:: compute_encoded_size(input_byte_length: int, shard_spec: zarr.core.array_spec.ArraySpec) -> int Given an input byte length, this method returns the output byte length. Raises a NotImplementedError for codecs with variable-sized outputs (e.g. compressors). :Parameters: **input_byte_length** : int .. **chunk_spec** : ArraySpec .. :Returns: int .. .. !! processed by numpydoc !! .. py:method:: decode(chunks_and_specs: collections.abc.Iterable[tuple[CodecOutput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecInput | None] :async: Decodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecOutput | None, ArraySpec]] Ordered set of encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecInput | None] .. .. !! processed by numpydoc !! .. py:method:: decode_partial(batch_info: collections.abc.Iterable[tuple[zarr.abc.store.ByteGetter, zarr.core.indexing.SelectorTuple, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[zarr.core.buffer.NDBuffer | None] :async: Partially decodes a batch of chunks. This method determines parts of a chunk from the slice selection, fetches these parts from the store (via ByteGetter) and decodes them. 
:Parameters: **batch_info** : Iterable[tuple[ByteGetter, SelectorTuple, ArraySpec]] Ordered set of information about slices of encoded chunks. The slice selection determines which parts of the chunk will be fetched. The ByteGetter is used to fetch the necessary bytes. The chunk spec contains information about the construction of an array from the bytes. :Returns: Iterable[NDBuffer | None] .. .. !! processed by numpydoc !! .. py:method:: encode(chunks_and_specs: collections.abc.Iterable[tuple[CodecInput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecOutput | None] :async: Encodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecInput | None, ArraySpec]] Ordered set of to-be-encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecOutput | None] .. .. !! processed by numpydoc !! .. py:method:: encode_partial(batch_info: collections.abc.Iterable[tuple[zarr.abc.store.ByteSetter, zarr.core.buffer.NDBuffer, zarr.core.indexing.SelectorTuple, zarr.core.array_spec.ArraySpec]]) -> None :async: Partially encodes a batch of chunks. This method determines parts of a chunk from the slice selection, encodes them and writes these parts to the store (via ByteSetter). If merging with existing chunk data in the store is necessary, this method will read from the store first and perform the merge. :Parameters: **batch_info** : Iterable[tuple[ByteSetter, NDBuffer, SelectorTuple, ArraySpec]] Ordered set of information about slices of to-be-encoded chunks. The slice selection determines which parts of the chunk will be encoded. The ByteSetter is used to write the necessary bytes and fetch bytes for existing chunk data. The chunk spec contains information about the chunk. .. !! processed by numpydoc !! .. py:method:: evolve_from_array_spec(array_spec: zarr.core.array_spec.ArraySpec) -> Self Fills in codec configuration parameters that can be automatically inferred from the array metadata. :Parameters: **array_spec** : ArraySpec .. :Returns: Self .. .. !! processed by numpydoc !! .. py:method:: from_dict(data: dict[str, zarr.core.common.JSON]) -> Self :classmethod: Create an instance of the model from a dictionary .. !! processed by numpydoc !! .. py:method:: resolve_metadata(chunk_spec: zarr.core.array_spec.ArraySpec) -> zarr.core.array_spec.ArraySpec Computed the spec of the chunk after it has been encoded by the codec. This is important for codecs that change the shape, data type or fill value of a chunk. The spec will then be used for subsequent codecs in the pipeline. :Parameters: **chunk_spec** : ArraySpec .. :Returns: ArraySpec .. .. !! processed by numpydoc !! .. py:method:: to_dict() -> dict[str, zarr.core.common.JSON] Recursively serialize this model to a dictionary. This method inspects the fields of self and calls `x.to_dict()` for any fields that are instances of `Metadata`. Sequences of `Metadata` are similarly recursed into, and the output of that recursion is collected in a list. .. !! processed by numpydoc !! .. py:method:: validate(*, shape: zarr.core.common.ChunkCoords, dtype: zarr.core.dtype.wrapper.ZDType[zarr.core.dtype.wrapper.TBaseDType, zarr.core.dtype.wrapper.TBaseScalar], chunk_grid: zarr.core.chunk_grids.ChunkGrid) -> None Validates that the codec configuration is compatible with the array metadata. Raises errors when the codec configuration is not compatible. 
:Parameters: **shape** : ChunkCoords The array shape **dtype** : np.dtype[Any] The array data type **chunk_grid** : ChunkGrid The array chunk grid .. !! processed by numpydoc !! .. py:attribute:: chunk_shape :type: zarr.core.common.ChunkCoords .. py:property:: codec_pipeline :type: zarr.abc.codec.CodecPipeline .. py:attribute:: codecs :type: tuple[zarr.abc.codec.Codec, Ellipsis] .. py:attribute:: index_codecs :type: tuple[zarr.abc.codec.Codec, Ellipsis] .. py:attribute:: index_location :type: ShardingCodecIndexLocation .. py:attribute:: is_fixed_size :type: bool .. py:class:: ShardingCodecIndexLocation(*args, **kwds) Bases: :py:obj:`enum.Enum` Enum for index location used by the sharding codec. .. !! processed by numpydoc !! .. py:attribute:: end :value: 'end' .. py:attribute:: start :value: 'start' .. py:class:: TransposeCodec(*, order: zarr.core.common.ChunkCoordsLike) Bases: :py:obj:`zarr.abc.codec.ArrayArrayCodec` Base class for array-to-array codecs. .. !! processed by numpydoc !! .. py:method:: compute_encoded_size(input_byte_length: int, _chunk_spec: zarr.core.array_spec.ArraySpec) -> int Given an input byte length, this method returns the output byte length. Raises a NotImplementedError for codecs with variable-sized outputs (e.g. compressors). :Parameters: **input_byte_length** : int .. **chunk_spec** : ArraySpec .. :Returns: int .. .. !! processed by numpydoc !! .. py:method:: decode(chunks_and_specs: collections.abc.Iterable[tuple[CodecOutput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecInput | None] :async: Decodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecOutput | None, ArraySpec]] Ordered set of encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecInput | None] .. .. !! processed by numpydoc !! .. py:method:: encode(chunks_and_specs: collections.abc.Iterable[tuple[CodecInput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecOutput | None] :async: Encodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecInput | None, ArraySpec]] Ordered set of to-be-encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecOutput | None] .. .. !! processed by numpydoc !! .. py:method:: evolve_from_array_spec(array_spec: zarr.core.array_spec.ArraySpec) -> Self Fills in codec configuration parameters that can be automatically inferred from the array metadata. :Parameters: **array_spec** : ArraySpec .. :Returns: Self .. .. !! processed by numpydoc !! .. py:method:: from_dict(data: dict[str, zarr.core.common.JSON]) -> Self :classmethod: Create an instance of the model from a dictionary .. !! processed by numpydoc !! .. py:method:: resolve_metadata(chunk_spec: zarr.core.array_spec.ArraySpec) -> zarr.core.array_spec.ArraySpec Computed the spec of the chunk after it has been encoded by the codec. This is important for codecs that change the shape, data type or fill value of a chunk. The spec will then be used for subsequent codecs in the pipeline. :Parameters: **chunk_spec** : ArraySpec .. :Returns: ArraySpec .. .. !! processed by numpydoc !! .. py:method:: to_dict() -> dict[str, zarr.core.common.JSON] Recursively serialize this model to a dictionary. This method inspects the fields of self and calls `x.to_dict()` for any fields that are instances of `Metadata`. 
Sequences of `Metadata` are similarly recursed into, and the output of that recursion is collected in a list. .. !! processed by numpydoc !! .. py:method:: validate(shape: tuple[int, Ellipsis], dtype: zarr.core.dtype.wrapper.ZDType[zarr.core.dtype.wrapper.TBaseDType, zarr.core.dtype.wrapper.TBaseScalar], chunk_grid: zarr.core.chunk_grids.ChunkGrid) -> None Validates that the codec configuration is compatible with the array metadata. Raises errors when the codec configuration is not compatible. :Parameters: **shape** : ChunkCoords The array shape **dtype** : np.dtype[Any] The array data type **chunk_grid** : ChunkGrid The array chunk grid .. !! processed by numpydoc !! .. py:attribute:: is_fixed_size :value: True .. py:attribute:: order :type: tuple[int, Ellipsis] .. py:class:: VLenBytesCodec Bases: :py:obj:`zarr.abc.codec.ArrayBytesCodec` Base class for array-to-bytes codecs. .. !! processed by numpydoc !! .. py:method:: compute_encoded_size(input_byte_length: int, _chunk_spec: zarr.core.array_spec.ArraySpec) -> int :abstractmethod: Given an input byte length, this method returns the output byte length. Raises a NotImplementedError for codecs with variable-sized outputs (e.g. compressors). :Parameters: **input_byte_length** : int .. **chunk_spec** : ArraySpec .. :Returns: int .. .. !! processed by numpydoc !! .. py:method:: decode(chunks_and_specs: collections.abc.Iterable[tuple[CodecOutput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecInput | None] :async: Decodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecOutput | None, ArraySpec]] Ordered set of encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecInput | None] .. .. !! processed by numpydoc !! .. py:method:: encode(chunks_and_specs: collections.abc.Iterable[tuple[CodecInput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecOutput | None] :async: Encodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecInput | None, ArraySpec]] Ordered set of to-be-encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecOutput | None] .. .. !! processed by numpydoc !! .. py:method:: evolve_from_array_spec(array_spec: zarr.core.array_spec.ArraySpec) -> Self Fills in codec configuration parameters that can be automatically inferred from the array metadata. :Parameters: **array_spec** : ArraySpec .. :Returns: Self .. .. !! processed by numpydoc !! .. py:method:: from_dict(data: dict[str, zarr.core.common.JSON]) -> Self :classmethod: Create an instance of the model from a dictionary .. !! processed by numpydoc !! .. py:method:: resolve_metadata(chunk_spec: zarr.core.array_spec.ArraySpec) -> zarr.core.array_spec.ArraySpec Computed the spec of the chunk after it has been encoded by the codec. This is important for codecs that change the shape, data type or fill value of a chunk. The spec will then be used for subsequent codecs in the pipeline. :Parameters: **chunk_spec** : ArraySpec .. :Returns: ArraySpec .. .. !! processed by numpydoc !! .. py:method:: to_dict() -> dict[str, zarr.core.common.JSON] Recursively serialize this model to a dictionary. This method inspects the fields of self and calls `x.to_dict()` for any fields that are instances of `Metadata`. Sequences of `Metadata` are similarly recursed into, and the output of that recursion is collected in a list. .. !! 
processed by numpydoc !! .. py:method:: validate(*, shape: zarr.core.common.ChunkCoords, dtype: zarr.core.dtype.wrapper.ZDType[zarr.core.dtype.wrapper.TBaseDType, zarr.core.dtype.wrapper.TBaseScalar], chunk_grid: zarr.core.chunk_grids.ChunkGrid) -> None Validates that the codec configuration is compatible with the array metadata. Raises errors when the codec configuration is not compatible. :Parameters: **shape** : ChunkCoords The array shape **dtype** : np.dtype[Any] The array data type **chunk_grid** : ChunkGrid The array chunk grid .. !! processed by numpydoc !! .. py:attribute:: is_fixed_size :type: bool .. py:class:: VLenUTF8Codec Bases: :py:obj:`zarr.abc.codec.ArrayBytesCodec` Base class for array-to-bytes codecs. .. !! processed by numpydoc !! .. py:method:: compute_encoded_size(input_byte_length: int, _chunk_spec: zarr.core.array_spec.ArraySpec) -> int :abstractmethod: Given an input byte length, this method returns the output byte length. Raises a NotImplementedError for codecs with variable-sized outputs (e.g. compressors). :Parameters: **input_byte_length** : int .. **chunk_spec** : ArraySpec .. :Returns: int .. .. !! processed by numpydoc !! .. py:method:: decode(chunks_and_specs: collections.abc.Iterable[tuple[CodecOutput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecInput | None] :async: Decodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecOutput | None, ArraySpec]] Ordered set of encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecInput | None] .. .. !! processed by numpydoc !! .. py:method:: encode(chunks_and_specs: collections.abc.Iterable[tuple[CodecInput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecOutput | None] :async: Encodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecInput | None, ArraySpec]] Ordered set of to-be-encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecOutput | None] .. .. !! processed by numpydoc !! .. py:method:: evolve_from_array_spec(array_spec: zarr.core.array_spec.ArraySpec) -> Self Fills in codec configuration parameters that can be automatically inferred from the array metadata. :Parameters: **array_spec** : ArraySpec .. :Returns: Self .. .. !! processed by numpydoc !! .. py:method:: from_dict(data: dict[str, zarr.core.common.JSON]) -> Self :classmethod: Create an instance of the model from a dictionary .. !! processed by numpydoc !! .. py:method:: resolve_metadata(chunk_spec: zarr.core.array_spec.ArraySpec) -> zarr.core.array_spec.ArraySpec Computed the spec of the chunk after it has been encoded by the codec. This is important for codecs that change the shape, data type or fill value of a chunk. The spec will then be used for subsequent codecs in the pipeline. :Parameters: **chunk_spec** : ArraySpec .. :Returns: ArraySpec .. .. !! processed by numpydoc !! .. py:method:: to_dict() -> dict[str, zarr.core.common.JSON] Recursively serialize this model to a dictionary. This method inspects the fields of self and calls `x.to_dict()` for any fields that are instances of `Metadata`. Sequences of `Metadata` are similarly recursed into, and the output of that recursion is collected in a list. .. !! processed by numpydoc !! .. 
py:method:: validate(*, shape: zarr.core.common.ChunkCoords, dtype: zarr.core.dtype.wrapper.ZDType[zarr.core.dtype.wrapper.TBaseDType, zarr.core.dtype.wrapper.TBaseScalar], chunk_grid: zarr.core.chunk_grids.ChunkGrid) -> None Validates that the codec configuration is compatible with the array metadata. Raises errors when the codec configuration is not compatible. :Parameters: **shape** : ChunkCoords The array shape **dtype** : np.dtype[Any] The array data type **chunk_grid** : ChunkGrid The array chunk grid .. !! processed by numpydoc !! .. py:attribute:: is_fixed_size :type: bool .. py:class:: ZstdCodec(*, level: int = 0, checksum: bool = False) Bases: :py:obj:`zarr.abc.codec.BytesBytesCodec` Base class for bytes-to-bytes codecs. .. !! processed by numpydoc !! .. py:method:: compute_encoded_size(_input_byte_length: int, _chunk_spec: zarr.core.array_spec.ArraySpec) -> int :abstractmethod: Given an input byte length, this method returns the output byte length. Raises a NotImplementedError for codecs with variable-sized outputs (e.g. compressors). :Parameters: **input_byte_length** : int .. **chunk_spec** : ArraySpec .. :Returns: int .. .. !! processed by numpydoc !! .. py:method:: decode(chunks_and_specs: collections.abc.Iterable[tuple[CodecOutput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecInput | None] :async: Decodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecOutput | None, ArraySpec]] Ordered set of encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecInput | None] .. .. !! processed by numpydoc !! .. py:method:: encode(chunks_and_specs: collections.abc.Iterable[tuple[CodecInput | None, zarr.core.array_spec.ArraySpec]]) -> collections.abc.Iterable[CodecOutput | None] :async: Encodes a batch of chunks. Chunks can be None in which case they are ignored by the codec. :Parameters: **chunks_and_specs** : Iterable[tuple[CodecInput | None, ArraySpec]] Ordered set of to-be-encoded chunks with their accompanying chunk spec. :Returns: Iterable[CodecOutput | None] .. .. !! processed by numpydoc !! .. py:method:: evolve_from_array_spec(array_spec: zarr.core.array_spec.ArraySpec) -> Self Fills in codec configuration parameters that can be automatically inferred from the array metadata. :Parameters: **array_spec** : ArraySpec .. :Returns: Self .. .. !! processed by numpydoc !! .. py:method:: from_dict(data: dict[str, zarr.core.common.JSON]) -> Self :classmethod: Create an instance of the model from a dictionary .. !! processed by numpydoc !! .. py:method:: resolve_metadata(chunk_spec: zarr.core.array_spec.ArraySpec) -> zarr.core.array_spec.ArraySpec Computed the spec of the chunk after it has been encoded by the codec. This is important for codecs that change the shape, data type or fill value of a chunk. The spec will then be used for subsequent codecs in the pipeline. :Parameters: **chunk_spec** : ArraySpec .. :Returns: ArraySpec .. .. !! processed by numpydoc !! .. py:method:: to_dict() -> dict[str, zarr.core.common.JSON] Recursively serialize this model to a dictionary. This method inspects the fields of self and calls `x.to_dict()` for any fields that are instances of `Metadata`. Sequences of `Metadata` are similarly recursed into, and the output of that recursion is collected in a list. .. !! processed by numpydoc !! .. 
py:method:: validate(*, shape: zarr.core.common.ChunkCoords, dtype: zarr.core.dtype.wrapper.ZDType[zarr.core.dtype.wrapper.TBaseDType, zarr.core.dtype.wrapper.TBaseScalar], chunk_grid: zarr.core.chunk_grids.ChunkGrid) -> None Validates that the codec configuration is compatible with the array metadata. Raises errors when the codec configuration is not compatible. :Parameters: **shape** : ChunkCoords The array shape **dtype** : np.dtype[Any] The array data type **chunk_grid** : ChunkGrid The array chunk grid .. !! processed by numpydoc !! .. py:attribute:: checksum :type: bool :value: False .. py:attribute:: is_fixed_size :value: True .. py:attribute:: level :type: int :value: 0
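
Usage examples
--------------

The generated reference above documents the constructor signatures and the ``to_dict``/``from_dict`` serialization hooks of each codec. The sketches below are illustrative additions, not part of the generated stubs; they rely only on the signatures documented above, and the exact layout of the serialized dictionaries follows the Zarr format 3 codec metadata of the installed zarr-python release.

.. code-block:: python

    from zarr.codecs import BloscCname, BloscCodec, BloscShuffle, GzipCodec, ZstdCodec

    # Bytes-to-bytes compressors, using the constructor parameters documented above.
    blosc = BloscCodec(typesize=4, cname=BloscCname.zstd, clevel=5, shuffle=BloscShuffle.bitshuffle)
    gzip_codec = GzipCodec(level=5)
    zstd = ZstdCodec(level=3, checksum=True)

    # Codecs serialize to JSON-compatible dictionaries and can be rebuilt from them.
    config = blosc.to_dict()
    restored = BloscCodec.from_dict(config)
    assert isinstance(restored, BloscCodec)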
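
Array-to-array filters, exactly one array-to-bytes codec, and bytes-to-bytes compressors compose into a single pipeline for a Zarr format 3 array. The array-creation call below is an assumption about recent zarr-python releases (a ``codecs`` keyword accepted by ``zarr.create``); check the array-creation API of the installed version before relying on it.

.. code-block:: python

    import numpy as np
    import zarr
    from zarr.codecs import BytesCodec, Endian, TransposeCodec, ZstdCodec

    # A Zarr format 3 codec pipeline: filters first, then one array-to-bytes
    # codec, then compressors.
    pipeline = [
        TransposeCodec(order=(1, 0)),      # array-to-array: reorder axes within each chunk
        BytesCodec(endian=Endian.little),  # array-to-bytes: fixed-size binary encoding
        ZstdCodec(level=3),                # bytes-to-bytes: compression
    ]

    # Assumed usage: recent zarr-python releases accept such a list via the
    # ``codecs`` parameter when creating a format 3 array.
    z = zarr.create(
        shape=(100, 100), chunks=(10, 10), dtype="float32",
        zarr_format=3, codecs=pipeline,
    )
    z[:] = np.random.random((100, 100)).astype("float32")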
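
``ShardingCodec`` is itself an array-to-bytes codec: it packs many small inner chunks, each encoded by its own nested pipeline, into one stored shard together with an index describing where each inner chunk lives. A hedged sketch, assuming only the constructor parameters documented above; the array's outer chunk shape defines the shard size and should be a multiple of the inner ``chunk_shape``.

.. code-block:: python

    from zarr.codecs import (
        BloscCodec,
        BytesCodec,
        Crc32cCodec,
        ShardingCodec,
        ShardingCodecIndexLocation,
    )

    sharding = ShardingCodec(
        chunk_shape=(32, 32),                            # shape of the inner chunks
        codecs=[BytesCodec(), BloscCodec(typesize=4)],   # pipeline applied to each inner chunk
        index_codecs=[BytesCodec(), Crc32cCodec()],      # pipeline applied to the shard index
        index_location=ShardingCodecIndexLocation.end,   # index stored at the end of each shard
    )

    # As an array-to-bytes codec, it occupies that slot of an array's codec
    # pipeline, e.g. via the (assumed) ``codecs=[sharding]`` argument shown above.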
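
For variable-length data, ``VLenUTF8Codec`` and ``VLenBytesCodec`` take the array-to-bytes slot in the pipeline and accept no configuration parameters; a minimal sketch:

.. code-block:: python

    from zarr.codecs import VLenUTF8Codec, ZstdCodec

    # Array-to-bytes codec for variable-length UTF-8 strings; no parameters.
    vlen = VLenUTF8Codec()

    # A bytes-to-bytes compressor can still follow it in the pipeline.
    string_pipeline = [vlen, ZstdCodec(level=3)]

    # The serialized form round-trips like any other codec.
    assert isinstance(VLenUTF8Codec.from_dict(vlen.to_dict()), VLenUTF8Codec)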