Linearly interpolates between two values using the formula value1 + (value2 - value1) * amount.
- Passing amount a value of 0 will cause value1 to be returned; a value of 1 will cause value2 to be returned.
- See also the precise variant, which uses the formula ((1 - amount) * value1) + (value2 * amount).
- Passing amount a value of 0 will cause value1 to be returned; a value of 1 will cause value2 to be returned.
- This method does not have the floating point precision issue that the value1 + (value2 - value1) * amount form has: when the two endpoints differ greatly in magnitude, the simpler form may not return value2 exactly when amount is 1.
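The practical difference between the two formulas is easiest to see numerically. The sketch below is illustrative Python, not the .NET API being documented; it shows the standard form failing to return value2 exactly at amount = 1 when the endpoints differ greatly in magnitude:

```python
def lerp(value1, value2, amount):
    # Standard form: can lose precision when value1 and value2
    # differ greatly in magnitude.
    return value1 + (value2 - value1) * amount

def lerp_precise(value1, value2, amount):
    # Precise form: returns the endpoints exactly at amount = 0 and 1.
    return ((1 - amount) * value1) + (value2 * amount)

# With widely separated magnitudes, the standard form does not
# return value2 at amount = 1:
print(lerp(1e16, 1.0, 1.0))          # 0.0, not 1.0
print(lerp_precise(1e16, 1.0, 1.0))  # 1.0
```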
- var name = await KeyboardInput.Show("Name", "What's your name?", "Player");
-
-
- var nameTask = KeyboardInput.Show("Name", "What's your name?", "Player");
- KeyboardInput.Cancel("John Doe");
- var name = await nameTask;
-
-
- var color = await MessageBox.Show("Color", "What's your favorite color?", new[] { "Red", "Green", "Blue" });
-
-
- var colorTask = MessageBox.Show("Color", "What's your favorite color?", new[] { "Red", "Green", "Blue" });
- MessageBox.Cancel(0);
- var color = await colorTask;
-
- true if sRGB surface formats are supported.
- On OpenGL platforms, it is true if both framebuffer sRGB and texture sRGB are supported.
- true if floating-point surface formats are supported.
- For OpenGL Desktop platforms it is always true.
- For OpenGL Mobile platforms it requires `GL_EXT_color_buffer_float`.
- If the requested format is not supported, a NotSupportedException will be thrown.
- true if half-floating-point surface formats are supported.
- For OpenGL Desktop platforms it is always true.
- For OpenGL Mobile platforms it requires `GL_EXT_color_buffer_half_float`.
- If the requested format is not supported, a NotSupportedException will be thrown.
- If true, entering full screen performs a hardware mode switch; if false, it will instead do a soft full screen by maximizing the window and making it borderless.
- Using this method it is easy to get certain vertex elements from a VertexBuffer.
-
- For example, to read back the position components (a Vector3 stored at byte offset 0 of each vertex) from a VertexBuffer whose vertices also carry other data:
- Vector3[] positions = new Vector3[numVertices];
- vertexBuffer.GetData(0, positions, 0, numVertices, vertexBuffer.VertexDeclaration.VertexStride);
-
-
- Continuing from the previous example, if you want to set only the texture coordinate component of the vertex data,
- you would call this method as follows (note the use of the 12-byte offset, which skips over the Vector3 position stored at the start of each vertex):
- Vector2[] texCoords = new Vector2[numVertices];
- vertexBuffer.SetData(12, texCoords, 0, numVertices, vertexBuffer.VertexDeclaration.VertexStride);
-
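The stride/offset mechanics can be sketched outside of the API. Below is a hypothetical Python illustration of pulling one component out of interleaved vertex data using a byte offset and a fixed stride; the vertex layout (12 bytes of position followed by 8 bytes of texture coordinates, 20 bytes per vertex) is an assumption made for illustration:

```python
import struct

# Interleaved vertices: Vector3 position (12 bytes) + Vector2 texcoord (8 bytes).
stride = 20
vertices = [((1.0, 2.0, 3.0), (0.0, 1.0)),
            ((4.0, 5.0, 6.0), (0.5, 0.5))]
buf = b"".join(struct.pack("<3f2f", *pos, *uv) for pos, uv in vertices)

def get_component(data, offset_in_bytes, fmt, stride):
    # Read one element per vertex, starting offset_in_bytes into each vertex,
    # advancing by the full vertex stride each time.
    return [struct.unpack_from(fmt, data, i * stride + offset_in_bytes)
            for i in range(len(data) // stride)]

positions = get_component(buf, 0, "<3f", stride)   # offset 0: positions
texcoords = get_component(buf, 12, "<2f", stride)  # offset 12: texcoords
print(texcoords)  # [(0.0, 1.0), (0.5, 0.5)]
```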
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(fileToCompress + ".zlib"))
- {
- using (Stream compressor = new ZlibStream(raw, CompressionMode.Compress))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(fileToCompress & ".zlib")
- Using compressor As Stream = New ZlibStream(raw, CompressionMode.Compress)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
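The same read-compress-write loop can be expressed with Python's built-in zlib module. This is only a cross-check of the streaming pattern, not the Ionic.Zlib API used in the examples above:

```python
import io
import zlib

WORKING_BUFFER_SIZE = 4096

def compress_stream(input_stream, output_stream):
    # Read fixed-size chunks and feed them through a streaming compressor,
    # mirroring the while-loop in the C# example above.
    compressor = zlib.compressobj()
    while True:
        chunk = input_stream.read(WORKING_BUFFER_SIZE)
        if not chunk:
            break
        output_stream.write(compressor.compress(chunk))
    output_stream.write(compressor.flush())  # emit the trailing zlib data

data = b"hello, zlib " * 1000
src, dst = io.BytesIO(data), io.BytesIO()
compress_stream(src, dst)
print(zlib.decompress(dst.getvalue()) == data)  # True
```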
-
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(fileToCompress + ".zlib"))
- {
- using (Stream compressor = new ZlibStream(raw,
- CompressionMode.Compress,
- CompressionLevel.BestCompression))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
-
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(fileToCompress & ".zlib")
- Using compressor As Stream = New ZlibStream(raw, CompressionMode.Compress, CompressionLevel.BestCompression)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
-
-
- using (var output = System.IO.File.Create(fileToCompress + ".zlib"))
- {
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (Stream compressor = new ZlibStream(output, CompressionMode.Compress, CompressionLevel.BestCompression, true))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- // can write additional data to the output stream here
- }
-
-
- Using output As FileStream = File.Create(fileToCompress & ".zlib")
- Using input As Stream = File.OpenRead(fileToCompress)
- Using compressor As Stream = New ZlibStream(output, CompressionMode.Compress, CompressionLevel.BestCompression, True)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- ' can write additional data to the output stream here.
- End Using
-
-
- private void InflateBuffer()
- {
- int bufferSize = 1024;
- byte[] buffer = new byte[bufferSize];
- ZlibCodec decompressor = new ZlibCodec();
-
- Console.WriteLine("\n============================================");
- Console.WriteLine("Size of Buffer to Inflate: {0} bytes.", CompressedBytes.Length);
- MemoryStream ms = new MemoryStream(DecompressedBytes);
-
- int rc = decompressor.InitializeInflate();
-
- decompressor.InputBuffer = CompressedBytes;
- decompressor.NextIn = 0;
- decompressor.AvailableBytesIn = CompressedBytes.Length;
-
- decompressor.OutputBuffer = buffer;
-
- // pass 1: inflate
- do
- {
- decompressor.NextOut = 0;
- decompressor.AvailableBytesOut = buffer.Length;
- rc = decompressor.Inflate(FlushType.None);
-
- if (rc != ZlibConstants.Z_OK && rc != ZlibConstants.Z_STREAM_END)
- throw new Exception("inflating: " + decompressor.Message);
-
- ms.Write(decompressor.OutputBuffer, 0, buffer.Length - decompressor.AvailableBytesOut);
- }
- while (decompressor.AvailableBytesIn > 0 || decompressor.AvailableBytesOut == 0);
-
- // pass 2: finish and flush
- do
- {
- decompressor.NextOut = 0;
- decompressor.AvailableBytesOut = buffer.Length;
- rc = decompressor.Inflate(FlushType.Finish);
-
- if (rc != ZlibConstants.Z_STREAM_END && rc != ZlibConstants.Z_OK)
- throw new Exception("inflating: " + decompressor.Message);
-
- if (buffer.Length - decompressor.AvailableBytesOut > 0)
- ms.Write(buffer, 0, buffer.Length - decompressor.AvailableBytesOut);
- }
- while (decompressor.AvailableBytesIn > 0 || decompressor.AvailableBytesOut == 0);
-
- decompressor.EndInflate();
- }
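For comparison, the chunked inflate loop above can be approximated with a streaming decompressor from Python's zlib module. This is an illustrative sketch of the same pattern, not the ZlibCodec API:

```python
import zlib

def inflate_buffer(compressed, chunk_size=1024):
    # Feed the compressed input to the decompressor in fixed-size pieces,
    # collecting output as it becomes available, then flush at the end
    # (the analogue of the FlushType.Finish pass).
    d = zlib.decompressobj()
    out = bytearray()
    for i in range(0, len(compressed), chunk_size):
        out += d.decompress(compressed[i:i + chunk_size])
    out += d.flush()  # emit any remaining buffered output
    return bytes(out)

original = b"The quick brown fox. " * 2000
restored = inflate_buffer(zlib.compress(original))
print(restored == original)  # True
```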
-
-
-
- int bufferSize = 40000;
- byte[] CompressedBytes = new byte[bufferSize];
- byte[] DecompressedBytes = new byte[bufferSize];
-
- ZlibCodec compressor = new ZlibCodec();
-
- compressor.InitializeDeflate(CompressionLevel.Default);
-
- compressor.InputBuffer = System.Text.ASCIIEncoding.ASCII.GetBytes(TextToCompress);
- compressor.NextIn = 0;
- compressor.AvailableBytesIn = compressor.InputBuffer.Length;
-
- compressor.OutputBuffer = CompressedBytes;
- compressor.NextOut = 0;
- compressor.AvailableBytesOut = CompressedBytes.Length;
-
- while (compressor.TotalBytesIn != TextToCompress.Length && compressor.TotalBytesOut < bufferSize)
- {
- compressor.Deflate(FlushType.None);
- }
-
- while (true)
- {
- int rc = compressor.Deflate(FlushType.Finish);
- if (rc == ZlibConstants.Z_STREAM_END) break;
- }
-
- compressor.EndDeflate();
-
-
-
- private void DeflateBuffer(CompressionLevel level)
- {
- int bufferSize = 1024;
- byte[] buffer = new byte[bufferSize];
- ZlibCodec compressor = new ZlibCodec();
-
- Console.WriteLine("\n============================================");
- Console.WriteLine("Size of Buffer to Deflate: {0} bytes.", UncompressedBytes.Length);
- MemoryStream ms = new MemoryStream();
-
- int rc = compressor.InitializeDeflate(level);
-
- compressor.InputBuffer = UncompressedBytes;
- compressor.NextIn = 0;
- compressor.AvailableBytesIn = UncompressedBytes.Length;
-
- compressor.OutputBuffer = buffer;
-
- // pass 1: deflate
- do
- {
- compressor.NextOut = 0;
- compressor.AvailableBytesOut = buffer.Length;
- rc = compressor.Deflate(FlushType.None);
-
- if (rc != ZlibConstants.Z_OK && rc != ZlibConstants.Z_STREAM_END)
- throw new Exception("deflating: " + compressor.Message);
-
- ms.Write(compressor.OutputBuffer, 0, buffer.Length - compressor.AvailableBytesOut);
- }
- while (compressor.AvailableBytesIn > 0 || compressor.AvailableBytesOut == 0);
-
- // pass 2: finish and flush
- do
- {
- compressor.NextOut = 0;
- compressor.AvailableBytesOut = buffer.Length;
- rc = compressor.Deflate(FlushType.Finish);
-
- if (rc != ZlibConstants.Z_STREAM_END && rc != ZlibConstants.Z_OK)
- throw new Exception("deflating: " + compressor.Message);
-
- if (buffer.Length - compressor.AvailableBytesOut > 0)
- ms.Write(buffer, 0, buffer.Length - compressor.AvailableBytesOut);
- }
- while (compressor.AvailableBytesIn > 0 || compressor.AvailableBytesOut == 0);
-
- compressor.EndDeflate();
-
- ms.Seek(0, SeekOrigin.Begin);
- CompressedBytes = new byte[compressor.TotalBytesOut];
- ms.Read(CompressedBytes, 0, CompressedBytes.Length);
- }
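The deflate side follows the same shape: compress in passes, then finish with a final flush. A rough Python equivalent of the loop structure (illustrative only; level handling and buffer management are simplified relative to ZlibCodec):

```python
import zlib

def deflate_buffer(uncompressed, level=zlib.Z_DEFAULT_COMPRESSION, chunk_size=1024):
    # Pass 1: compress the input chunk by chunk (the FlushType.None loop).
    c = zlib.compressobj(level)
    out = bytearray()
    for i in range(0, len(uncompressed), chunk_size):
        out += c.compress(uncompressed[i:i + chunk_size])
    # Pass 2: finish and flush (the FlushType.Finish loop).
    out += c.flush(zlib.Z_FINISH)
    return bytes(out)

data = b"some text to compress " * 1000
compressed = deflate_buffer(data, level=9)
print(len(compressed) < len(data))          # True
print(zlib.decompress(compressed) == data)  # True
```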
-
-
- var adler = Adler.Adler32(0, null, 0, 0);
- adler = Adler.Adler32(adler, buffer, index, length);
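The same initialize-then-accumulate pattern exists in Python's zlib.adler32, which can be used to sanity-check the idea: a running Adler-32 fed consecutive chunks equals the checksum of the whole buffer. Illustrative sketch, not the Ionic Adler API:

```python
import zlib

data = b"one two three four"

# Whole-buffer checksum.
whole = zlib.adler32(data)

# Running checksum, fed chunk by chunk; pass the previous value back in.
adler = zlib.adler32(b"")  # initial Adler-32 value (1)
for i in range(0, len(data), 5):
    adler = zlib.adler32(data[i:i + 5], adler)

print(adler == whole)  # True
```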
-
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(outputFile))
- {
- using (Stream compressor = new GZipStream(raw, CompressionMode.Compress))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
- Dim outputFile As String = (fileToCompress & ".compressed")
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(outputFile)
- Using compressor As Stream = New GZipStream(raw, CompressionMode.Compress)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
-
-
- private void GunZipFile(string filename)
- {
- if (!filename.EndsWith(".gz"))
- throw new ArgumentException("filename");
- var DecompressedFile = filename.Substring(0, filename.Length - 3);
- byte[] working = new byte[WORKING_BUFFER_SIZE];
- int n = 1;
- using (System.IO.Stream input = System.IO.File.OpenRead(filename))
- {
- using (Stream decompressor = new Ionic.Zlib.GZipStream(input, CompressionMode.Decompress, true))
- {
- using (var output = System.IO.File.Create(DecompressedFile))
- {
- while (n != 0)
- {
- n = decompressor.Read(working, 0, working.Length);
- if (n > 0)
- {
- output.Write(working, 0, n);
- }
- }
- }
- }
- }
- }
-
-
-
- Private Sub GunZipFile(ByVal filename as String)
- If Not (filename.EndsWith(".gz")) Then
- Throw New ArgumentException("filename")
- End If
- Dim DecompressedFile as String = filename.Substring(0,filename.Length-3)
- Dim working(WORKING_BUFFER_SIZE) as Byte
- Dim n As Integer = 1
- Using input As Stream = File.OpenRead(filename)
- Using decompressor As Stream = new Ionic.Zlib.GZipStream(input, CompressionMode.Decompress, True)
- Using output As Stream = File.Create(DecompressedFile)
- Do
- n = decompressor.Read(working, 0, working.Length)
- If n > 0 Then
- output.Write(working, 0, n)
- End If
- Loop While (n > 0)
- End Using
- End Using
- End Using
- End Sub
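The decompress loop in GunZipFile maps directly onto Python's gzip module, which can serve as a quick cross-check of the expected behavior. Sketch only; the examples in this document use Ionic.Zlib.GZipStream:

```python
import gzip
import io

WORKING_BUFFER_SIZE = 4096

def gunzip_stream(input_stream, output_stream):
    # Wrap the compressed stream and copy decompressed chunks out,
    # mirroring the Read/Write loop in GunZipFile.
    with gzip.GzipFile(fileobj=input_stream, mode="rb") as decompressor:
        while True:
            chunk = decompressor.read(WORKING_BUFFER_SIZE)
            if not chunk:
                break
            output_stream.write(chunk)

original = b"example payload " * 500
src = io.BytesIO(gzip.compress(original))
dst = io.BytesIO()
gunzip_stream(src, dst)
print(dst.getvalue() == original)  # True
```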
-
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(fileToCompress + ".gz"))
- {
- using (Stream compressor = new GZipStream(raw,
- CompressionMode.Compress,
- CompressionLevel.BestCompression))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
-
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(fileToCompress & ".gz")
- Using compressor As Stream = New GZipStream(raw, CompressionMode.Compress, CompressionLevel.BestCompression)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
-
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(outputFile))
- {
- using (Stream compressor = new GZipStream(raw, CompressionMode.Compress, CompressionLevel.BestCompression, true))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
- Dim outputFile As String = (fileToCompress & ".compressed")
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(outputFile)
- Using compressor As Stream = New GZipStream(raw, CompressionMode.Compress, CompressionLevel.BestCompression, True)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
-
-
- byte[] working = new byte[WORKING_BUFFER_SIZE];
- using (System.IO.Stream input = System.IO.File.OpenRead(_CompressedFile))
- {
- using (Stream decompressor = new Ionic.Zlib.GZipStream(input, CompressionMode.Decompress, true))
- {
- using (var output = System.IO.File.Create(_DecompressedFile))
- {
- int n;
- while ((n = decompressor.Read(working, 0, working.Length)) != 0)
- {
- output.Write(working, 0, n);
- }
- }
- }
- }
-
- The adapter interface represents a display subsystem (including one or more GPUs, DACs and video memory).
A display subsystem is often referred to as a video card; however, on some machines the display subsystem is part of the motherboard.
To enumerate the display subsystems, use EnumAdapters.
To get an interface to the adapter for a particular device, use GetAdapter.
To create a software adapter, use CreateSoftwareAdapter.
Windows Phone 8: This API is supported.
-Gets a DXGI 1.0 description of an adapter (or video card).
-Graphics apps can use the DXGI API to retrieve an accurate set of graphics memory values on systems that have Windows Display Driver Model (WDDM) drivers. The following are the critical steps involved.
- bool HasWDDMDriver()
- {
-     LPDIRECT3DCREATE9EX pD3D9Create9Ex = NULL;
-     HMODULE hD3D9 = NULL;
-     hD3D9 = LoadLibrary( L"d3d9.dll" );
-     if ( NULL == hD3D9 )
-     {
-         return false;
-     }
-     /* Try to create IDirect3D9Ex interface (also known as a DX9L interface).
-        This interface can only be created if the driver is a WDDM driver. */
-     pD3D9Create9Ex = (LPDIRECT3DCREATE9EX) GetProcAddress( hD3D9, "Direct3DCreate9Ex" );
-     return pD3D9Create9Ex != NULL;
- }
- IDXGIDevice * pDXGIDevice;
- hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
- IDXGIAdapter * pDXGIAdapter;
- pDXGIDevice->GetAdapter(&pDXGIAdapter);
- DXGI_ADAPTER_DESC adapterDesc;
- pDXGIAdapter->GetDesc(&adapterDesc);
Enumerate adapter (video card) outputs.
-The index of the output.
The address of a reference to an
A code that indicates success or failure (see DXGI_ERROR).
If the adapter came from a device created using
When the EnumOutputs method succeeds and fills the ppOutput parameter with the address of the reference to the output interface, EnumOutputs increments the output interface's reference count. To avoid a memory leak, when you finish using the output interface, call the Release method to decrement the reference count.
EnumOutputs first returns the output on which the desktop primary is displayed. This output corresponds with an index of zero. EnumOutputs then returns other outputs.
-Gets a DXGI 1.0 description of an adapter (or video card).
-A reference to a
Returns
Graphics apps can use the DXGI API to retrieve an accurate set of graphics memory values on systems that have Windows Display Driver Model (WDDM) drivers. The following are the critical steps involved.
- bool HasWDDMDriver()
- {
-     LPDIRECT3DCREATE9EX pD3D9Create9Ex = NULL;
-     HMODULE hD3D9 = NULL;
-     hD3D9 = LoadLibrary( L"d3d9.dll" );
-     if ( NULL == hD3D9 )
-     {
-         return false;
-     }
-     /* Try to create IDirect3D9Ex interface (also known as a DX9L interface).
-        This interface can only be created if the driver is a WDDM driver. */
-     pD3D9Create9Ex = (LPDIRECT3DCREATE9EX) GetProcAddress( hD3D9, "Direct3DCreate9Ex" );
-     return pD3D9Create9Ex != NULL;
- }
- IDXGIDevice * pDXGIDevice;
- hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
- IDXGIAdapter * pDXGIAdapter;
- pDXGIDevice->GetAdapter(&pDXGIAdapter);
- DXGI_ADAPTER_DESC adapterDesc;
- pDXGIAdapter->GetDesc(&adapterDesc);
Checks whether the system supports a device interface for a graphics component.
-The
The user mode driver version of InterfaceName. This is returned only if the interface is supported, otherwise this parameter will be
An
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
- IDXGIDevice * pDXGIDevice;
- hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
Windows Phone 8: This API is supported.
-Returns the adapter for the specified device.
-If the GetAdapter method succeeds, the reference count on the adapter interface will be incremented. To avoid a memory leak, be sure to release the interface when you are finished using it.
-Gets or sets the GPU thread priority.
-Returns the adapter for the specified device.
-The address of an
Returns
If the GetAdapter method succeeds, the reference count on the adapter interface will be incremented. To avoid a memory leak, be sure to release the interface when you are finished using it.
-Returns a surface. This method is used internally and you should not call it directly in your application.
-A reference to a
The number of surfaces to create.
A DXGI_USAGE flag that specifies how the surface is expected to be used.
An optional reference to a
The address of an
Returns
The CreateSurface method creates a buffer to exchange data between one or more devices. It is used internally, and you should not directly call it.
The runtime automatically creates an
Gets the residency status of an array of resources.
-An array of
An array of
The number of resources in the ppResources argument array and pResidencyStatus argument array.
Returns
The information returned by the pResidencyStatus argument array describes the residency status at the time that the QueryResourceResidency method was called.
Note: The residency status will constantly change. If you call the QueryResourceResidency method during a device removed state, the pResidencyStatus argument will return the
Sets the GPU thread priority.
-A value that specifies the required GPU thread priority. This value must be between -7 and 7, inclusive, where 0 represents normal priority.
Return
The values for the Priority parameter function as follows:
To use the SetGPUThreadPriority method, you should have a comprehensive understanding of GPU scheduling. You should profile your application to ensure that it behaves as intended. If used inappropriately, the SetGPUThreadPriority method can impede rendering speed and result in a poor user experience.
-Gets the GPU thread priority.
-A reference to a variable that receives a value that indicates the current GPU thread priority. The value will be between -7 and 7, inclusive, where 0 represents normal priority.
Return
Inherited from objects that are tied to the device so that they can retrieve a reference to it.
-Windows Phone 8: This API is supported.
-Retrieves the device.
-The reference id for the device.
The address of a reference to the device.
A code that indicates success or failure (see DXGI_ERROR).
The type of interface that is returned can be any interface published by the device. For example, it could be an
An
Windows Phone 8: This API is supported.
-Sets application-defined data to the object and associates that data with a
A
The size of the object's data.
A reference to the object's data.
Returns one of the DXGI_ERROR values.
SetPrivateData makes a copy of the specified data and stores it with the object.
Private data that SetPrivateData stores in the object occupies the same storage space as private data that is stored by associated Direct3D objects (for example, by a Microsoft Direct3D 11 device through
The debug layer reports memory leaks by outputting a list of object interface references along with their friendly names. The default friendly name is "&lt;unnamed&gt;". You can set the friendly name so that you can determine if the corresponding object interface reference caused the leak. To set the friendly name, use the SetPrivateData method and the well-known private data GUID WKPDID_D3DDebugObjectName:
- static const char c_szName[] = "My name";
- hr = pContext->SetPrivateData( WKPDID_D3DDebugObjectName, sizeof( c_szName ) - 1, c_szName );
You can use
Set an interface in the object's private data.
-A
The interface to set.
Returns one of the following DXGI_ERROR.
This API associates an interface reference with the object.
- When the interface is set, its reference count is incremented. When the data are overwritten (by calling SetPrivateData or SetPrivateDataInterface with the same
Get a reference to the object's data.
-A
The size of the data.
Pointer to the data.
Returns one of the following DXGI_ERROR.
If the data returned is a reference to an
You can pass GUID_DeviceType in the Name parameter of GetPrivateData to retrieve the device type from the display adapter object (
To get the type of device on which the display adapter was created
On Windows 7 or earlier, this type is either a value from D3D10_DRIVER_TYPE or
Gets the parent of the object.
-The ID of the requested interface.
The address of a reference to the parent object.
Returns one of the DXGI_ERROR values.
An
Create a factory by calling CreateDXGIFactory.
Because you can create a Direct3D device without creating a swap chain, you might need to retrieve the factory that is used to create the device in order to create a swap chain. You can request the
- IDXGIDevice * pDXGIDevice = nullptr;
- hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
- IDXGIAdapter * pDXGIAdapter = nullptr;
- hr = pDXGIDevice->GetAdapter( &pDXGIAdapter );
- IDXGIFactory * pIDXGIFactory = nullptr;
- pDXGIAdapter->GetParent(__uuidof(IDXGIFactory), (void **)&pIDXGIFactory);
Windows Phone 8: This API is supported.
-Enumerates the adapters (video cards).
-The index of the adapter to enumerate.
The address of a reference to an
Returns
When you create a factory, the factory enumerates the set of adapters that are available in the system. Therefore, if you change the adapters in a system, you must destroy and recreate the
When the EnumAdapters method succeeds and fills the ppAdapter parameter with the address of the reference to the adapter interface, EnumAdapters increments the adapter interface's reference count. When you finish using the adapter interface, call the Release method to decrement the reference count before you destroy the reference.
EnumAdapters first returns the adapter with the output on which the desktop primary is displayed. This adapter corresponds with an index of zero. EnumAdapters next returns other adapters with outputs. EnumAdapters finally returns adapters without outputs.
-Allows DXGI to monitor an application's message queue for the alt-enter key sequence (which causes the application to switch from windowed to full screen or vice versa).
-The handle of the window that is to be monitored. This parameter can be
One or more of the following values:
The combination of WindowHandle and Flags informs DXGI to stop monitoring window messages for the previously-associated window.
If the application switches to full-screen mode, DXGI will choose a full-screen resolution to be the smallest supported resolution that is larger or the same size as the current back buffer size.
Applications can make some changes to make the transition from windowed to full screen more efficient. For example, on a WM_SIZE message, the application should release any outstanding swap-chain back buffers, call
While windowed, the application can, if it chooses, restrict the size of its window's client area to sizes to which it is comfortable rendering. A fully flexible application would make no such restriction, but UI elements or other design considerations can, of course, make this flexibility untenable. If the application further chooses to restrict its window's client area to just those that match supported full-screen resolutions, the application can field WM_SIZING, then check against
Applications that want to handle mode changes or Alt+Enter themselves should call MakeWindowAssociation with the
Get the window through which the user controls the transition to and from full screen.
-A reference to a window handle.
[Starting with Direct3D 11.1, we recommend not to use CreateSwapChain anymore to create a swap chain. Instead, use CreateSwapChainForHwnd, CreateSwapChainForCoreWindow, or CreateSwapChainForComposition depending on how you want to create the swap chain.]
Creates a swap chain.
-
If you attempt to create a swap chain in full-screen mode, and full-screen mode is unavailable, the swap chain will be created in windowed mode and
If the buffer width or the buffer height is zero, the sizes will be inferred from the output window size in the swap-chain description.
Because the target output can't be chosen explicitly when the swap chain is created, we recommend not to create a full-screen swap chain. This can reduce presentation performance if the swap chain size and the output window size do not match. Here are two ways to ensure that the sizes match:
If the swap chain is in full-screen mode, before you release it you must use SetFullscreenState to switch it to windowed mode. For more information about releasing a swap chain, see the "Destroying a Swap Chain" section of DXGI Overview.
After the runtime renders the initial frame in full screen, the runtime might unexpectedly exit full screen during a call to
- // Detect if newly created full-screen swap chain isn't actually full screen.
- IDXGIOutput * pTarget;
- BOOL bFullscreen;
- if (SUCCEEDED(pSwapChain->GetFullscreenState(&bFullscreen, &pTarget)))
- {
-     pTarget->Release();
- }
- else
-     bFullscreen = FALSE;
- // If not full screen, enable full screen again.
- if (!bFullscreen)
- {
-     ShowWindow(hWnd, SW_MINIMIZE);
-     ShowWindow(hWnd, SW_RESTORE);
-     pSwapChain->SetFullscreenState(TRUE, NULL);
- }
You can specify
However, to use stereo presentation and to change resize behavior for the flip model, applications must use the
Create an adapter interface that represents a software adapter.
-Handle to the software adapter's dll. HMODULE can be obtained with GetModuleHandle or LoadLibrary.
Address of a reference to an adapter (see
A software adapter is a DLL that implements the entirety of a device driver interface, plus emulation, if necessary, of kernel-mode graphics components for Windows. Details on implementing a software adapter can be found in the Windows Vista Driver Development Kit. This is a very complex development task, and is not recommended for general readers.
Calling this method will increment the module's reference count by one. The reference count can be decremented by calling FreeLibrary.
The typical calling scenario is to call LoadLibrary, pass the handle to CreateSoftwareAdapter, then immediately call FreeLibrary on the DLL and forget the DLL's HMODULE. Since the software adapter calls FreeLibrary when it is destroyed, the lifetime of the DLL will now be owned by the adapter, and the application is free of any further consideration of its lifetime.
-The
This interface is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
To create a factory, call the CreateDXGIFactory1 function.
Because you can create a Direct3D device without creating a swap chain, you might need to retrieve the factory that is used to create the device in order to create a swap chain.
- You can request the
- IDXGIDevice * pDXGIDevice;
- hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
- IDXGIAdapter * pDXGIAdapter;
- hr = pDXGIDevice->GetParent(__uuidof(IDXGIAdapter), (void **)&pDXGIAdapter);
- IDXGIFactory1 * pIDXGIFactory;
- pDXGIAdapter->GetParent(__uuidof(IDXGIFactory1), (void **)&pIDXGIFactory);
Informs an application of the possible need to re-enumerate adapters.
-This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
-Enumerates both adapters (video cards) with or without outputs.
-The index of the adapter to enumerate.
The address of a reference to an
Returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
When you create a factory, the factory enumerates the set of adapters that are available in the system. Therefore, if you change the adapters in a system, you must destroy and recreate the
When the EnumAdapters1 method succeeds and fills the ppAdapter parameter with the address of the reference to the adapter interface, EnumAdapters1 increments the adapter interface's reference count. When you finish using the adapter interface, call the Release method to decrement the reference count before you destroy the reference.
EnumAdapters1 first returns the adapter with the output on which the desktop primary is displayed. This adapter corresponds with an index of zero. EnumAdapters1 next returns other adapters with outputs. EnumAdapters1 finally returns adapters without outputs.
-Informs an application of the possible need to re-enumerate adapters.
-IsCurrent returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
- The
To create a Microsoft DirectX Graphics Infrastructure (DXGI) 1.2 factory interface, pass
Because you can create a Direct3D device without creating a swap chain, you might need to retrieve the factory that is used to create the device in order to create a swap chain.
- You can request the
- IDXGIDevice * pDXGIDevice;
- hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
- IDXGIAdapter * pDXGIAdapter;
- hr = pDXGIDevice->GetParent(__uuidof(IDXGIAdapter), (void **)&pDXGIAdapter);
- IDXGIFactory2 * pIDXGIFactory;
- pDXGIAdapter->GetParent(__uuidof(IDXGIFactory2), (void **)&pIDXGIFactory);
Determines whether to use stereo mode.
-We recommend that windowed applications call IsWindowedStereoEnabled before they attempt to use stereo. IsWindowedStereoEnabled returns TRUE if both of the following items are true:
The creation of a windowed stereo swap chain succeeds if the first requirement is met. However, if the adapter can't scan out stereo, the output on that adapter is reduced to mono.
The Direct3D 11.1 Simple Stereo 3D Sample shows how to add a stereoscopic 3D effect and how to respond to system stereo changes.
Determines whether to use stereo mode.
Indicates whether to use stereo mode. TRUE indicates that you can use stereo mode; otherwise, FALSE.
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, IsWindowedStereoEnabled always returns FALSE.
We recommend that windowed applications call IsWindowedStereoEnabled before they attempt to use stereo. IsWindowedStereoEnabled returns TRUE if both of the following items are true:
The creation of a windowed stereo swap chain succeeds if the first requirement is met. However, if the adapter can't scan out stereo, the output on that adapter is reduced to mono.
The Direct3D 11.1 Simple Stereo 3D Sample shows how to add a stereoscopic 3D effect and how to respond to system stereo changes.
Creates a swap chain that is associated with an HWND handle to the output window for the swap chain.
CreateSwapChainForHwnd returns:
Platform Update for Windows 7:
If you specify the width, height, or both (Width and Height members of
Because you can associate only one flip presentation model swap chain at a time with an
For info about how to choose a format for the swap chain's back buffer, see Converting data for the color space.
Creates a swap chain that is associated with the CoreWindow object for the output window for the swap chain.
CreateSwapChainForCoreWindow returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, CreateSwapChainForCoreWindow fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
If you specify the width, height, or both (Width and Height members of
Because you can associate only one flip presentation model swap chain (per layer) at a time with a CoreWindow, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain. For more info about this situation, see Deferred Destruction Issues with Flip Presentation Swap Chains.
For info about how to choose a format for the swap chain's back buffer, see Converting data for the color space.
Identifies the adapter on which a shared resource object was created.
A handle to a shared resource object. The
A reference to a variable that receives a locally unique identifier (
GetSharedResourceAdapterLuid returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, GetSharedResourceAdapterLuid fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
You cannot share resources across adapters. Therefore, you cannot open a shared resource on an adapter other than the adapter on which the resource was created. Call GetSharedResourceAdapterLuid before you open a shared resource to ensure that the resource was created on the appropriate adapter. To open a shared resource, call the
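Matching the returned LUID against a specific adapter is a plain field-by-field comparison. A minimal sketch in C, assuming a two-part LowPart/HighPart layout mirroring the Windows LUID struct (the struct and helper name here are illustrative, not part of DXGI):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the Windows LUID struct: a locally unique identifier
 * split into a 32-bit low part and a signed 32-bit high part. */
typedef struct {
    uint32_t LowPart;
    int32_t  HighPart;
} Luid;

/* LUIDs carry no ordering semantics; "same adapter" is simply
 * equality of both parts, e.g. comparing the LUID from
 * GetSharedResourceAdapterLuid against an adapter description's LUID. */
static int luid_equal(Luid a, Luid b)
{
    return a.LowPart == b.LowPart && a.HighPart == b.HighPart;
}
```

Comparing LUIDs this way before opening a shared resource lets an app skip adapters that could not have created it.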
Registers an application window to receive notification messages of changes of stereo status.
The handle of the window to send a notification message to when stereo status change occurs.
Identifies the notification message to send.
A reference to a key value that an application can pass to the
RegisterStereoStatusWindow returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, RegisterStereoStatusWindow fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Registers to receive notification of changes in stereo status by using event signaling.
A handle to the event object that the operating system sets when notification of stereo status change occurs. The CreateEvent or OpenEvent function returns this handle.
A reference to a key value that an application can pass to the
RegisterStereoStatusEvent returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, RegisterStereoStatusEvent fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Unregisters a window or an event to stop it from receiving notification when stereo status changes.
A key value for the window or event to unregister. The
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, UnregisterStereoStatus has no effect. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Registers an application window to receive notification messages of changes of occlusion status.
The handle of the window to send a notification message to when occlusion status change occurs.
Identifies the notification message to send.
A reference to a key value that an application can pass to the
RegisterOcclusionStatusWindow returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, RegisterOcclusionStatusWindow fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Apps choose the Windows message that Windows sends when occlusion status changes.
Registers to receive notification of changes in occlusion status by using event signaling.
A handle to the event object that the operating system sets when notification of occlusion status change occurs. The CreateEvent or OpenEvent function returns this handle.
A reference to a key value that an application can pass to the
RegisterOcclusionStatusEvent returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, RegisterOcclusionStatusEvent fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
If you call RegisterOcclusionStatusEvent multiple times with the same event handle, RegisterOcclusionStatusEvent fails with
If you call RegisterOcclusionStatusEvent multiple times with different event handles, RegisterOcclusionStatusEvent properly registers the events.
Unregisters a window or an event to stop it from receiving notification when occlusion status changes.
A key value for the window or event to unregister. The
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, UnregisterOcclusionStatus has no effect. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Creates a swap chain that you can use to send Direct3D content into the DirectComposition API or the Windows.UI.Xaml framework to compose in a window.
CreateSwapChainForComposition returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, CreateSwapChainForComposition fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
You can use composition swap chains with either DirectComposition's
The
For info about how to choose a format for the swap chain's back buffer, see Converting data for the color space.
Enables creating Microsoft DirectX Graphics Infrastructure (DXGI) objects.
Outputs the
Returns
For Direct3D 12, it's no longer possible to backtrack from a device to the
Provides an adapter which can be provided to
The globally unique identifier (
The address of an
Returns
For more information, see DXGI 1.4 Improvements.
Identifies the type of DXGI adapter.
The
Specifies no flags.
Value always set to 0. This flag is reserved.
Specifies a software adapter. For more info about this flag, see new info in Windows 8 about enumerating adapters.
Direct3D 11: This enumeration value is supported starting with Windows 8.
Identifies the type of DXGI adapter.
The
Specifies no flags.
Value always set to 0. This flag is reserved.
Specifies a software adapter. For more info about this flag, see new info in Windows 8 about enumerating adapters.
Direct3D 11: This enumeration value is supported starting with Windows 8.
Forces this enumeration to compile to 32 bits in size. Without this value, some compilers would allow this enumeration to compile to a size other than 32 bits. This value is not used.
Identifies the alpha value, transparency behavior, of a surface.
For more information about alpha mode, see
Indicates that the transparency behavior is not specified.
Indicates that the transparency behavior is premultiplied. Each color is first scaled by the alpha value. The alpha value itself is the same in both straight and premultiplied alpha. Typically, no color channel value is greater than the alpha channel value. If a color channel value in a premultiplied format is greater than the alpha channel, the standard source-over blending math results in an additive blend.
Indicates that the transparency behavior is not premultiplied. The alpha channel indicates the transparency of the color.
Indicates to ignore the transparency behavior.
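The difference between the premultiplied and straight behaviors above is purely arithmetic. A small sketch in C (normalized float channels; the function names are illustrative, not a DXGI API):

```c
#include <assert.h>

/* Straight -> premultiplied: each color channel is scaled by alpha;
 * the alpha value itself is unchanged. */
static float premultiply(float color, float alpha)
{
    return color * alpha;
}

/* Source-over blending:
 *   premultiplied source: out = src + dst * (1 - alpha)
 *   straight source:      out = src * alpha + dst * (1 - alpha)
 * If a "premultiplied" color channel exceeds alpha, the first formula
 * degenerates into an additive blend, as the text notes. */
static float over_premultiplied(float src, float alpha, float dst)
{
    return src + dst * (1.0f - alpha);
}

static float over_straight(float src, float alpha, float dst)
{
    return src * alpha + dst * (1.0f - alpha);
}
```

For the same logical color, `over_straight(c, a, d)` and `over_premultiplied(premultiply(c, a), a, d)` produce identical results.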
Specifies color space types.
This enum is used within DXGI in the CheckColorSpaceSupport, SetColorSpace1, and CheckOverlayColorSpaceSupport methods. It is also referenced in D3D11 video methods such as
The following color parameters are defined:
| Property | Value |
| --- | --- |
| Colorspace | RGB |
| Range | 0-255 |
| Gamma | 2.2 |
| Siting | Image |
| Primaries | BT.709 |

This is the standard definition for sRGB. Note that this is often implemented with a linear segment, but in that case the exponent is corrected to stay aligned with a gamma 2.2 curve. This is usually used with 8 bit and 10 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | RGB |
| Range | 0-255 |
| Gamma | 1.0 |
| Siting | Image |
| Primaries | BT.709 |

This is the standard definition for scRGB, and is usually used with 16 bit integer, 16 bit floating point, and 32 bit floating point channels.
| Property | Value |
| --- | --- |
| Colorspace | RGB |
| Range | 16-235 |
| Gamma | 2.2 |
| Siting | Image |
| Primaries | BT.709 |

This is the standard definition for ITU-R Recommendation BT.709. Note that due to the inclusion of a linear segment, the transfer curve looks similar to a pure exponential gamma of 1.9. This is usually used with 8 bit and 10 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | RGB |
| Range | 16-235 |
| Gamma | 2.2 |
| Siting | Image |
| Primaries | BT.2020 |

This is usually used with 10, 12, or 16 bit color channels.
Reserved.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 0-255 |
| Gamma | 2.2 |
| Siting | Image |
| Primaries | BT.709 |
| Transfer | BT.601 |

This definition is commonly used for JPG, and is usually used with 8, 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 16-235 |
| Gamma | 2.2 |
| Siting | Video |
| Primaries | BT.601 |

This definition is commonly used for MPEG2, and is usually used with 8, 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 0-255 |
| Gamma | 2.2 |
| Siting | Video |
| Primaries | BT.601 |

This is sometimes used for H.264 camera capture, and is usually used with 8, 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 16-235 |
| Gamma | 2.2 |
| Siting | Video |
| Primaries | BT.709 |

This definition is commonly used for H.264 and HEVC, and is usually used with 8, 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 0-255 |
| Gamma | 2.2 |
| Siting | Video |
| Primaries | BT.709 |

This is sometimes used for H.264 camera capture, and is usually used with 8, 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 16-235 |
| Gamma | 2.2 |
| Siting | Video |
| Primaries | BT.2020 |

This definition may be used by HEVC, and is usually used with 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 0-255 |
| Gamma | 2.2 |
| Siting | Video |
| Primaries | BT.2020 |

This is usually used with 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | RGB |
| Range | 0-255 |
| Gamma | 2084 |
| Siting | Image |
| Primaries | BT.2020 |

This is usually used with 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 16-235 |
| Gamma | 2084 |
| Siting | Video |
| Primaries | BT.2020 |

This is usually used with 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | RGB |
| Range | 16-235 |
| Gamma | 2084 |
| Siting | Image |
| Primaries | BT.2020 |

This is usually used with 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 16-235 |
| Gamma | 2.2 |
| Siting | Video |
| Primaries | BT.2020 |

This is usually used with 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | YCbCr |
| Range | 16-235 |
| Gamma | 2084 |
| Siting | Video |
| Primaries | BT.2020 |

This is usually used with 10, 12, or 16 bit color channels.
| Property | Value |
| --- | --- |
| Colorspace | RGB |
| Range | 0-255 |
| Gamma | 2.2 |
| Siting | Image |
| Primaries | BT.2020 |

This is usually used with 10, 12, or 16 bit color channels.
A custom color definition is used.
A custom color definition is used.
Identifies the granularity at which the graphics processing unit (GPU) can be preempted from performing its current compute task.
You call the
Indicates the preemption granularity as a compute packet.
Indicates the preemption granularity as a dispatch (for example, a call to the
Indicates the preemption granularity as a thread group. A thread group is a part of a dispatch.
Indicates the preemption granularity as a thread in a thread group. A thread is a part of a thread group.
Indicates the preemption granularity as a compute instruction in a thread.
Flags that indicate how the back buffers should be rotated to fit the physical rotation of a monitor.
Unspecified rotation.
Specifies no rotation.
Specifies 90 degrees of rotation.
Specifies 180 degrees of rotation.
Specifies 270 degrees of rotation.
Flags indicating how an image is stretched to fit a given monitor's resolution.
Selecting the CENTERED or STRETCHED modes can result in a mode change even if you specify the native resolution of the display in the
This enum is used by the
Unspecified scaling.
Specifies no scaling. The image is centered on the display. This flag is typically used for a fixed-dot-pitch display (such as an LED display).
Specifies stretched scaling.
Flags indicating the method the raster uses to create an image on a surface.
This enum is used by the
Scanline order is unspecified.
The image is created from the first scanline to the last without skipping any.
The image is created beginning with the upper field.
The image is created beginning with the lower field.
Status codes that can be returned by DXGI functions.
The
#define _FACDXGI 0x87a
#define MAKE_DXGI_STATUS(code) MAKE_HRESULT(0, _FACDXGI, code)

For example, DXGI_STATUS_OCCLUDED is defined as MAKE_DXGI_STATUS(1).
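The macro arithmetic can be checked portably; the MAKE_HRESULT expansion below restates the standard severity/facility/code bit layout in plain C (on Windows it comes from winerror.h):

```c
#include <assert.h>
#include <stdint.h>

/* HRESULT layout: severity in bit 31, facility in bits 16-26,
 * status code in the low 16 bits. */
#define MAKE_HRESULT(sev, fac, code) \
    ((uint32_t)(((uint32_t)(sev) << 31) | ((uint32_t)(fac) << 16) | (uint32_t)(code)))

/* DXGI status codes: severity 0 (success), facility 0x87a. */
#define _FACDXGI 0x87a
#define MAKE_DXGI_STATUS(code) MAKE_HRESULT(0, _FACDXGI, code)
```

MAKE_DXGI_STATUS(1) therefore evaluates to 0x087A0001.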
Specifies a range of hardware features, to be used when checking for feature support.
This enum is used by the CheckFeatureSupport method.
The display supports tearing, a requirement of variable refresh rate displays.
Resource data formats, including fully-typed and typeless formats. A list of modifiers at the bottom of the page more fully describes each format type.
The format is not known.
A four-component, 128-bit typeless format that supports 32 bits per channel including alpha.
A four-component, 128-bit floating-point format that supports 32 bits per channel including alpha. 1,5,8
A four-component, 128-bit unsigned-integer format that supports 32 bits per channel including alpha.
A four-component, 128-bit signed-integer format that supports 32 bits per channel including alpha.
A three-component, 96-bit typeless format that supports 32 bits per color channel.
A three-component, 96-bit floating-point format that supports 32 bits per color channel.5,8
A three-component, 96-bit unsigned-integer format that supports 32 bits per color channel.
A three-component, 96-bit signed-integer format that supports 32 bits per color channel.
A four-component, 64-bit typeless format that supports 16 bits per channel including alpha.
A four-component, 64-bit floating-point format that supports 16 bits per channel including alpha.5,7
A four-component, 64-bit unsigned-normalized-integer format that supports 16 bits per channel including alpha.
A four-component, 64-bit unsigned-integer format that supports 16 bits per channel including alpha.
A four-component, 64-bit signed-normalized-integer format that supports 16 bits per channel including alpha.
A four-component, 64-bit signed-integer format that supports 16 bits per channel including alpha.
A two-component, 64-bit typeless format that supports 32 bits for the red channel and 32 bits for the green channel.
A two-component, 64-bit floating-point format that supports 32 bits for the red channel and 32 bits for the green channel.5,8
A two-component, 64-bit unsigned-integer format that supports 32 bits for the red channel and 32 bits for the green channel.
A two-component, 64-bit signed-integer format that supports 32 bits for the red channel and 32 bits for the green channel.
A two-component, 64-bit typeless format that supports 32 bits for the red channel, 8 bits for the green channel, and 24 bits are unused.
A 32-bit floating-point component, and two unsigned-integer components (with an additional 32 bits). This format supports 32-bit depth, 8-bit stencil, and 24 bits are unused.
A 32-bit floating-point component, and two typeless components (with an additional 32 bits). This format supports a 32-bit red channel, 8 bits are unused, and 24 bits are unused.
A 32-bit typeless component, and two unsigned-integer components (with an additional 32 bits). This format has 32 bits unused, 8 bits for green channel, and 24 bits are unused.
A four-component, 32-bit typeless format that supports 10 bits for each color and 2 bits for alpha.
A four-component, 32-bit unsigned-normalized-integer format that supports 10 bits for each color and 2 bits for alpha.
A four-component, 32-bit unsigned-integer format that supports 10 bits for each color and 2 bits for alpha.
Three partial-precision floating-point numbers encoded into a single 32-bit value (a variant of s10e5, which is sign bit, 10-bit mantissa, and 5-bit biased (15) exponent). There are no sign bits, and there is a 5-bit biased (15) exponent for each channel, 6-bit mantissa for R and G, and a 5-bit mantissa for B, as shown in the following illustration.5,7
A four-component, 32-bit typeless format that supports 8 bits per channel including alpha.
A four-component, 32-bit unsigned-normalized-integer format that supports 8 bits per channel including alpha.
A four-component, 32-bit unsigned-normalized integer sRGB format that supports 8 bits per channel including alpha.
A four-component, 32-bit unsigned-integer format that supports 8 bits per channel including alpha.
A four-component, 32-bit signed-normalized-integer format that supports 8 bits per channel including alpha.
A four-component, 32-bit signed-integer format that supports 8 bits per channel including alpha.
A two-component, 32-bit typeless format that supports 16 bits for the red channel and 16 bits for the green channel.
A two-component, 32-bit floating-point format that supports 16 bits for the red channel and 16 bits for the green channel.5,7
A two-component, 32-bit unsigned-normalized-integer format that supports 16 bits each for the green and red channels.
A two-component, 32-bit unsigned-integer format that supports 16 bits for the red channel and 16 bits for the green channel.
A two-component, 32-bit signed-normalized-integer format that supports 16 bits for the red channel and 16 bits for the green channel.
A two-component, 32-bit signed-integer format that supports 16 bits for the red channel and 16 bits for the green channel.
A single-component, 32-bit typeless format that supports 32 bits for the red channel.
A single-component, 32-bit floating-point format that supports 32 bits for depth.5,8
A single-component, 32-bit floating-point format that supports 32 bits for the red channel.5,8
A single-component, 32-bit unsigned-integer format that supports 32 bits for the red channel.
A single-component, 32-bit signed-integer format that supports 32 bits for the red channel.
A two-component, 32-bit typeless format that supports 24 bits for the red channel and 8 bits for the green channel.
A 32-bit z-buffer format that supports 24 bits for depth and 8 bits for stencil.
A 32-bit format, that contains a 24 bit, single-component, unsigned-normalized integer, with an additional typeless 8 bits. This format has 24 bits red channel and 8 bits unused.
A 32-bit format, that contains a 24 bit, single-component, typeless format, with an additional 8 bit unsigned integer component. This format has 24 bits unused and 8 bits green channel.
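The 24/8 split in the formats above is easiest to see as packing arithmetic. A sketch in C (placing depth above stencil in the word is an assumption made for illustration; the real in-memory layout belongs to the driver and hardware):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a normalized depth value (0..1) as a 24-bit unsigned-normalized
 * integer together with an 8-bit stencil value in one 32-bit word. */
static uint32_t pack_d24s8(float depth, uint8_t stencil)
{
    if (depth < 0.0f) depth = 0.0f;
    if (depth > 1.0f) depth = 1.0f;
    /* 2^24 - 1 = 16777215 is full scale for the 24-bit depth channel;
     * compute in double so rounding near full scale stays exact. */
    uint32_t d = (uint32_t)((double)depth * 16777215.0 + 0.5);
    return (d << 8) | (uint32_t)stencil;
}
```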
A two-component, 16-bit typeless format that supports 8 bits for the red channel and 8 bits for the green channel.
A two-component, 16-bit unsigned-normalized-integer format that supports 8 bits for the red channel and 8 bits for the green channel.
A two-component, 16-bit unsigned-integer format that supports 8 bits for the red channel and 8 bits for the green channel.
A two-component, 16-bit signed-normalized-integer format that supports 8 bits for the red channel and 8 bits for the green channel.
A two-component, 16-bit signed-integer format that supports 8 bits for the red channel and 8 bits for the green channel.
A single-component, 16-bit typeless format that supports 16 bits for the red channel.
A single-component, 16-bit floating-point format that supports 16 bits for the red channel.5,7
A single-component, 16-bit unsigned-normalized-integer format that supports 16 bits for depth.
A single-component, 16-bit unsigned-normalized-integer format that supports 16 bits for the red channel.
A single-component, 16-bit unsigned-integer format that supports 16 bits for the red channel.
A single-component, 16-bit signed-normalized-integer format that supports 16 bits for the red channel.
A single-component, 16-bit signed-integer format that supports 16 bits for the red channel.
A single-component, 8-bit typeless format that supports 8 bits for the red channel.
A single-component, 8-bit unsigned-normalized-integer format that supports 8 bits for the red channel.
A single-component, 8-bit unsigned-integer format that supports 8 bits for the red channel.
A single-component, 8-bit signed-normalized-integer format that supports 8 bits for the red channel.
A single-component, 8-bit signed-integer format that supports 8 bits for the red channel.
A single-component, 8-bit unsigned-normalized-integer format for alpha only.
A single-component, 1-bit unsigned-normalized integer format that supports 1 bit for the red channel.
Three partial-precision floating-point numbers encoded into a single 32-bit value all sharing the same 5-bit exponent (variant of s10e5, which is sign bit, 10-bit mantissa, and 5-bit biased (15) exponent). There is no sign bit, and there is a shared 5-bit biased (15) exponent and a 9-bit mantissa for each channel, as shown in the following illustration. 2,6,7.
A four-component, 32-bit unsigned-normalized-integer format. This packed RGB format is analogous to the UYVY format. Each 32-bit block describes a pair of pixels: (R8, G8, B8) and (R8, G8, B8) where the R8/B8 values are repeated, and the G8 values are unique to each pixel.
Width must be even.
A four-component, 32-bit unsigned-normalized-integer format. This packed RGB format is analogous to the YUY2 format. Each 32-bit block describes a pair of pixels: (R8, G8, B8) and (R8, G8, B8) where the R8/B8 values are repeated, and the G8 values are unique to each pixel.
Width must be even.
Four-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format for sRGB data. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format for sRGB data. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format for sRGB data. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
One-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
One-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
One-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Two-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Two-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Two-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A three-component, 16-bit unsigned-normalized-integer format that supports 5 bits for blue, 6 bits for green, and 5 bits for red.
Direct3D 10 through Direct3D 11: This value is defined for DXGI. However, Direct3D 10, 10.1, or 11 devices do not support this format.
Direct3D 11.1: This value is not supported until Windows 8.
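Packing for that 5-6-5 layout is simple bit arithmetic. A sketch in C (placing red in the high bits is an assumption made for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Quantize 8-bit channels down to 5 (red), 6 (green), and 5 (blue)
 * bits by dropping low bits, then pack them into one 16-bit word. */
static uint16_t pack_b5g6r5(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)((((uint16_t)(r >> 3)) << 11) |
                      (((uint16_t)(g >> 2)) << 5)  |
                       ((uint16_t)(b >> 3)));
}
```

Green gets the extra bit because the eye is most sensitive to green luminance.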
A four-component, 16-bit unsigned-normalized-integer format that supports 5 bits for each color channel and 1-bit alpha.
Direct3D 10 through Direct3D 11: This value is defined for DXGI. However, Direct3D 10, 10.1, or 11 devices do not support this format.
Direct3D 11.1: This value is not supported until Windows 8.
A four-component, 32-bit unsigned-normalized-integer format that supports 8 bits for each color channel and 8-bit alpha.
A four-component, 32-bit unsigned-normalized-integer format that supports 8 bits for each color channel and 8 bits unused.
A four-component, 32-bit 2.8-biased fixed-point format that supports 10 bits for each color channel and 2-bit alpha.
A four-component, 32-bit typeless format that supports 8 bits for each channel including alpha.
A four-component, 32-bit unsigned-normalized standard RGB format that supports 8 bits for each channel including alpha.
A four-component, 32-bit typeless format that supports 8 bits for each color channel, and 8 bits are unused.
A four-component, 32-bit unsigned-normalized standard RGB format that supports 8 bits for each color channel, and 8 bits are unused.
A typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Most common YUV 4:4:4 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
10-bit per channel packed YUV 4:4:4 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
16-bit per channel packed YUV 4:4:4 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
Most common YUV 4:2:0 video resource format. Valid luminance data view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width and height must be even. Direct3D 11 staging resources and initData parameters for this format use (rowPitch * (height + (height / 2))) bytes. The first (SysMemPitch * height) bytes are the Y plane, the remaining (SysMemPitch * (height / 2)) bytes are the UV plane.
An app using the YUV 4:2:0 formats must map the luma (Y) plane separately from the chroma (UV) planes. Developers do this by calling
Direct3D 11.1: This value is not supported until Windows 8.
10-bit per channel planar YUV 4:2:0 video resource format. Valid luminance data view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width and height must be even. Direct3D 11 staging resources and initData parameters for this format use (rowPitch * (height + (height / 2))) bytes. The first (SysMemPitch * height) bytes are the Y plane, the remaining (SysMemPitch * (height / 2)) bytes are the UV plane.
An app using the YUV 4:2:0 formats must map the luma (Y) plane separately from the chroma (UV) planes. Developers do this by calling
Direct3D 11.1: This value is not supported until Windows 8.
16-bit per channel planar YUV 4:2:0 video resource format. Valid luminance data view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width and height must be even. Direct3D 11 staging resources and initData parameters for this format use (rowPitch * (height + (height / 2))) bytes. The first (SysMemPitch * height) bytes are the Y plane, the remaining (SysMemPitch * (height / 2)) bytes are the UV plane.
An app using the YUV 4:2:0 formats must map the luma (Y) plane separately from the chroma (UV) planes. Developers do this by calling
Direct3D 11.1: This value is not supported until Windows 8.
8-bit per channel planar YUV 4:2:0 video resource format. This format is subsampled where each pixel has its own Y value, but each 2x2 pixel block shares a single U and V value. The runtime requires that the width and height of all resources that are created with this format are multiples of 2. The runtime also requires that the left, right, top, and bottom members of any
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width and height must be even. Direct3D 11 staging resources and initData parameters for this format use (rowPitch * (height + (height / 2))) bytes.
An app using the YUV 4:2:0 formats must map the luma (Y) plane separately from the chroma (UV) planes. Developers do this by calling
Direct3D 11.1:??This value is not supported until Windows?8.
Most common YUV 4:2:2 video resource format. Valid view formats for this video resource format are
A unique valid view format for this video resource format is
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width must be even.
Direct3D 11.1: This value is not supported until Windows 8.
10-bit per channel packed YUV 4:2:2 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width must be even.
Direct3D 11.1: This value is not supported until Windows 8.
16-bit per channel packed YUV 4:2:2 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width must be even.
Direct3D 11.1: This value is not supported until Windows 8.
Most common planar YUV 4:1:1 video resource format. Valid luminance data view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width must be a multiple of 4. Direct3D11 staging resources and initData parameters for this format use (rowPitch * height * 2) bytes. The first (SysMemPitch * height) bytes are the Y plane, the next ((SysMemPitch / 2) * height) bytes are the UV plane, and the remainder is padding.
Direct3D 11.1: This value is not supported until Windows 8.
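The 4:1:1 staging layout described above (total = rowPitch * height * 2, with a half-pitch UV plane and trailing padding) can be sketched numerically. The helper and its names are illustrative only:

```cpp
#include <cstddef>

// Byte layout of a planar 4:1:1 staging resource as described above:
// total = rowPitch * height * 2; the first (rowPitch * height) bytes are
// the Y plane, the next ((rowPitch / 2) * height) bytes are the UV plane,
// and the remainder of the allocation is padding.
struct Yuv411Layout
{
    std::size_t total;    // total bytes of the staging allocation
    std::size_t ySize;    // bytes of the Y plane
    std::size_t uvSize;   // bytes of the UV plane
    std::size_t padding;  // trailing padding bytes
};

Yuv411Layout ComputeYuv411Layout(std::size_t rowPitch, std::size_t height)
{
    Yuv411Layout l;
    l.total = rowPitch * height * 2;
    l.ySize = rowPitch * height;
    l.uvSize = (rowPitch / 2) * height;
    l.padding = l.total - l.ySize - l.uvSize;
    return l;
}
```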
4-bit palettized YUV format that is commonly used for DVD subpicture.
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
4-bit palettized YUV format that is commonly used for DVD subpicture.
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
8-bit palettized format that is used for palettized RGB data when the processor processes ISDB-T data and for palettized YUV data when the processor processes Blu-ray data.
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
8-bit palettized format with 8 bits of alpha that is used for palettized YUV data when the processor processes Blu-ray data.
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
A four-component, 16-bit unsigned-normalized integer format that supports 4 bits for each channel including alpha.
Direct3D 11.1: This value is not supported until Windows 8.
A video format; an 8-bit version of a hybrid planar 4:2:2 format.
An 8-bit YCbCrA 4:4 rendering format.
An 8-bit YCbCrA 4:4:4:4 rendering format.
Indicates options for presenting frames to the swap chain.
-This enum is used by the
Specifies that the presentation mode is a composition surface, meaning that the conversion from YUV to RGB is happening once per output refresh (for example, 60 Hz). When this value is returned, the media app should discontinue use of the decode swap chain and perform YUV to RGB conversion itself, reducing the frequency of YUV to RGB conversion to once per video frame.
Specifies that the presentation mode is an overlay surface, meaning that the YUV to RGB conversion is happening efficiently in hardware (once per video frame). When this value is returned, the media app can continue to use the decode swap chain. See
No presentation is specified.
An issue occurred that caused content protection to be invalidated in a swap-chain with hardware content protection, and is usually because the system ran out of hardware protected memory. The app will need to do one of the following:
Note that simply re-creating the swap chain or the device will usually have no impact as the DWM will continue to run out of memory and will return the same failure.
Identifies the granularity at which the graphics processing unit (GPU) can be preempted from performing its current graphics rendering task.
-You call the
The following figure shows granularity of graphics rendering tasks.
-Indicates the preemption granularity as a DMA buffer.
Indicates the preemption granularity as a graphics primitive. A primitive is a section in a DMA buffer and can be a group of triangles.
Indicates the preemption granularity as a triangle. A triangle is a part of a primitive.
Indicates the preemption granularity as a pixel. A pixel is a part of a triangle.
Indicates the preemption granularity as a graphics instruction. A graphics instruction operates on a pixel.
Specifies the header metadata type.
-This enum is used by the SetHDRMetaData method.
-Indicates there is no header metadata.
Indicates the header metadata is held by a
Get a reference to the data contained in the surface, and deny GPU access to the surface.
-Use
A reference to the surface data (see
CPU read-write flags. These flags can be combined with a logical OR.
Specifies the memory segment group to use.
-This enum is used by QueryVideoMemoryInfo and SetVideoMemoryReservation.
Refer to the remarks for
The grouping of segments which is considered local to the video adapter, and represents the fastest available memory to the GPU. Applications should target the local segment group as the target size for their working set.
The grouping of segments which is considered non-local to the video adapter, and may have slower performance than the local segment group.
Options for swap-chain color space.
-This enum is used by SetColorSpace.
-Specifies nominal range YCbCr, which isn't an absolute color space, but a way of encoding RGB info.
Specifies BT.709, which standardizes the format of high-definition television and has 16:9 (widescreen) aspect ratio.
Specifies xvYCC or extended-gamut YCC (also x.v.Color) color space that can be used in the video electronics of television sets to support a gamut 1.8 times as large as that of the sRGB color space.
Specifies flags for the OfferResources1 method.
- Identifies the importance of a resource's content when you call the
Priority determines how likely the operating system is to discard an offered resource. Resources offered with lower priority are discarded first.
-Identifies the type of reference shape.
-The reference type is a monochrome mouse reference, which is a monochrome bitmap. The bitmap's size is specified by width and height in a 1 bits per pixel (bpp) device independent bitmap (DIB) format AND mask that is followed by another 1 bpp DIB format XOR mask of the same size.
The reference type is a color mouse reference, which is a color bitmap. The bitmap's size is specified by width and height in a 32 bpp ARGB DIB format.
The reference type is a masked color mouse reference. A masked color mouse reference is a 32 bpp ARGB format bitmap with the mask value in the alpha bits. The only allowed mask values are 0 and 0xFF. When the mask value is 0, the RGB value should replace the screen pixel. When the mask value is 0xFF, an XOR operation is performed on the RGB value and the screen pixel; the result replaces the screen pixel.
Specifies support for overlay color space.
-Overlay color space support is present.
Specifies overlay support to check for in a call to
Presents a rendered image to the user.
-Starting with Direct3D 11.1, consider using
For the best performance when flipping swap-chain buffers in a full-screen application, see Full-Screen Application Performance Hints.
Because calling Present might cause the render thread to wait on the message-pump thread, be careful when calling this method in an application that uses multiple threads. For more details, see Multithreading Considerations.
Differences between Direct3D 9 and Direct3D 10: Specifying
For flip presentation model swap chains that you create with the
For info about how data values change when you present content to the screen, see Converting data for the color space.
-An integer that specifies how to synchronize presentation of a frame with the vertical blank.
For the bit-block transfer (bitblt) model (
For the flip model (
For an example that shows how sync-interval values affect a flip presentation queue, see Remarks.
If the update region straddles more than one output (each represented by
An integer value that contains swap-chain presentation options. These options are defined by the DXGI_PRESENT constants.
Specifies result flags for the ReclaimResources1 method.
-Flags indicating the memory location of a resource.
-This enum is used by QueryResourceResidency.
-The resource is located in video memory.
At least some of the resource is located in CPU memory.
At least some of the resource has been paged out to the hard drive.
Set the priority for evicting the resource from memory.
-The eviction priority is a memory-management variable that is used by DXGI for determining how to populate overcommitted memory.
You can set priority levels other than the defined values when appropriate. For example, you can set a resource with a priority level of 0x78000001 to indicate that the resource is slightly above normal.
-The priority is one of the following values:
| Value | Meaning |
|---|---|
| | The resource is unused and can be evicted as soon as another resource requires the memory that the resource occupies. |
| | The eviction priority of the resource is low. The placement of the resource is not critical, and minimal work is performed to find a location for the resource. For example, if a GPU can render with a vertex buffer from either local or non-local memory with little difference in performance, that vertex buffer is low priority. Other more critical resources (for example, a render target or texture) can then occupy the faster memory. |
| | The eviction priority of the resource is normal. The placement of the resource is important, but not critical, for performance. The resource is placed in its preferred location instead of a low-priority resource. |
| | The eviction priority of the resource is high. The resource is placed in its preferred location instead of a low-priority or normal-priority resource. |
| | The resource is evicted from memory only if there is no other way of resolving the memory requirement. |
Identifies resize behavior when the back-buffer size does not match the size of the target output.
-The
```cpp
float aspectRatio = backBufferWidth / float(backBufferHeight);

// Horizontal fill
float scaledWidth = outputWidth;
float scaledHeight = outputWidth / aspectRatio;
if (scaledHeight >= outputHeight)
{
    // Do vertical fill
    scaledWidth = outputHeight * aspectRatio;
    scaledHeight = outputHeight;
}

float offsetX = (outputWidth - scaledWidth) * 0.5f;
float offsetY = (outputHeight - scaledHeight) * 0.5f;

rect.left = static_cast<LONG>(offsetX);
rect.top = static_cast<LONG>(offsetY);
rect.right = static_cast<LONG>(offsetX + scaledWidth);
rect.bottom = static_cast<LONG>(offsetY + scaledHeight);

rect.left = std::max<LONG>(0, rect.left);
rect.top = std::max<LONG>(0, rect.top);
rect.right = std::min<LONG>(static_cast<LONG>(outputWidth), rect.right);
rect.bottom = std::min<LONG>(static_cast<LONG>(outputHeight), rect.bottom);
```
-
Note that outputWidth and outputHeight are the pixel sizes of the presentation target size. In the case of CoreWindow, this requires converting the logicalWidth and logicalHeight values from DIPs to pixels using the window's DPI property.
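The DIP-to-pixel conversion mentioned above can be sketched as follows. This relies on the standard Windows convention of 96 DIPs per inch; the helper name is illustrative:

```cpp
#include <cmath>

// Converts a length in device-independent pixels (DIPs) to physical pixels.
// Windows defines one DIP as 1/96 inch, so the conversion scales by dpi / 96.
inline int DipsToPixels(float dips, float dpi)
{
    // Round to the nearest whole pixel.
    return static_cast<int>(std::floor(dips * dpi / 96.0f + 0.5f));
}
```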
-Directs DXGI to make the back-buffer contents scale to fit the presentation target size. This is the implicit behavior of DXGI when you call the
Directs DXGI to make the back-buffer contents appear without any scaling when the presentation target size is not equal to the back-buffer size. The top edges of the back buffer and presentation target are aligned together. If the WS_EX_LAYOUTRTL style is associated with the
This value specifies that all target areas outside the back buffer of a swap chain are filled with the background color that you specify in a call to
Directs DXGI to make the back-buffer contents scale to fit the presentation target size, while preserving the aspect ratio of the back-buffer. If the scaled back-buffer does not fill the presentation area, it will be centered with black borders.
This constant is supported on Windows Phone 8 and Windows 10.
Note that with legacy Win32 window swapchains, this works the same as
Specifies color space support for the swap chain.
-Color space support is present.
Overlay color space support is present.
Options for swap-chain behavior.
-This enumeration is used by the
This enumeration is also used by the
You don't need to set
Swap chains that you create with the
When you call
Set this flag to turn off automatic image rotation; that is, do not perform a rotation when transferring the contents of the front buffer to the monitor. Use this flag to avoid a bandwidth penalty when an application expects to handle rotation. This option is valid only during full-screen mode.
Set this flag to enable an application to switch modes by calling
Set this flag to enable an application to render using GDI on a swap chain or a surface. This will allow the application to call
Set this flag to indicate that the swap chain might contain protected content; therefore, the operating system supports the creation of the swap chain only when driver and hardware protection is used. If the driver and hardware do not support content protection, the call to create a resource for the swap chain fails.
Direct3D 11: This enumeration value is supported starting with Windows 8.
Set this flag to indicate that shared resources that are created within the swap chain must be protected by using the driver's mechanism for restricting access to shared surfaces.
Direct3D 11: This enumeration value is supported starting with Windows 8.
Set this flag to restrict presented content to the local displays. Therefore, the presented content is not accessible via remote accessing or through the desktop duplication APIs.
This flag supports the window content protection features of Windows. Applications can use this flag to protect their own onscreen window content from being captured or copied through a specific set of public operating system features and APIs.
If you use this flag with windowed (
Direct3D 11: This enumeration value is supported starting with Windows 8.
Set this flag to create a waitable object you can use to ensure rendering does not begin while a frame is still being presented. When this flag is used, the swapchain's latency must be set with the
Note: This enumeration value is supported starting with Windows 8.1.
Set this flag to create a swap chain in the foreground layer for multi-plane rendering. This flag can only be used with CoreWindow swap chains, which are created with CreateSwapChainForCoreWindow. Apps should not create foreground swap chains if
Note that
Note: This enumeration value is supported starting with Windows 8.1.
Set this flag to create a swap chain for full-screen video.
Note: This enumeration value is supported starting with Windows 8.1.
Set this flag to create a swap chain for YUV video.
Note: This enumeration value is supported starting with Windows 8.1.
Indicates that the swap chain should be created such that all underlying resources can be protected by the hardware. Resource creation will fail if hardware content protection is not supported.
This flag has the following restrictions:
Note: This enumeration value is supported starting with Windows 10.
Tearing support is a requirement to enable displays that support variable refresh rates to function properly when the application presents a swap chain tied to a full screen borderless window. Win32 apps can already achieve tearing in fullscreen exclusive mode by calling SetFullscreenState(TRUE), but the recommended approach for Win32 developers is to use this tearing flag instead.
To check for hardware support of this feature, refer to
Options for handling pixels in a display surface after calling
This enumeration is used by the
To use multisampling with
The primary difference between presentation models is how back-buffer contents get to the Desktop Window Manager (DWM) for composition. In the bitblt model, which is used with the
When you call
Regardless of whether the flip model is more efficient, an application still might choose the bitblt model because the bitblt model is the only way to mix GDI and DirectX presentation. In the flip model, the application must create the swap chain with
For more info about the flip-model swap chain and optimizing presentation, see Enhancing presentation with the flip model, dirty rectangles, and scrolled areas.
-Creates a DXGI 1.1 factory that you can use to generate other DXGI objects.
-The globally unique identifier (
Address of a reference to an
Returns
Use a DXGI 1.1 factory to generate objects that enumerate adapters, create swap chains, and associate a window with the alt+enter key sequence for toggling to and from the full-screen display mode.
If the CreateDXGIFactory1 function succeeds, the reference count on the
This entry point is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Note: Do not mix the use of DXGI 1.0 and DXGI 1.1 in an application.

Creates a DXGI 1.3 factory that you can use to generate other DXGI objects.
In Windows 8, any DXGI factory created while DXGIDebug.dll was present on the system would load and use it. Starting in Windows 8.1, apps explicitly request that DXGIDebug.dll be loaded instead. Use CreateDXGIFactory2 and specify the
Valid values include the
The globally unique identifier (
Address of a reference to an
Returns
This function accepts a flag indicating whether DXGIDebug.dll is loaded. The function otherwise behaves identically to CreateDXGIFactory1.
- The
This interface is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
A display subsystem is often referred to as a video card; however, on some machines, the display subsystem is part of the motherboard.
To enumerate the display sub-systems, use
Windows Phone 8: This API is supported.
-Gets a DXGI 1.1 description of an adapter (or video card).
-This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Use the GetDesc1 method to get a DXGI 1.1 description of an adapter. To get a DXGI 1.0 description, use the
Gets a DXGI 1.1 description of an adapter (or video card).
-A reference to a
Returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Use the GetDesc1 method to get a DXGI 1.1 description of an adapter. To get a DXGI 1.0 description, use the
The
A display subsystem is often referred to as a video card; however, on some computers, the display subsystem is part of the motherboard.
To enumerate the display subsystems, use
To get an interface to the adapter for a particular device, use
To create a software adapter, use
Gets a Microsoft DirectX Graphics Infrastructure (DXGI) 1.2 description of an adapter or video card. This description includes information about the granularity at which the graphics processing unit (GPU) can be preempted from performing its current task.
-Use the GetDesc2 method to get a DXGI 1.2 description of an adapter. To get a DXGI 1.1 description, use the
The Windows Display Driver Model (WDDM) scheduler can preempt the GPU's execution of application tasks. The granularity at which the GPU can be preempted from performing its current task in the WDDM 1.1 or earlier driver model is a direct memory access (DMA) buffer for graphics tasks or a compute packet for compute tasks. The GPU can switch between tasks only after it completes the currently executing unit of work, a DMA buffer or a compute packet.
A DMA buffer is the largest independent unit of graphics work that the WDDM scheduler can submit to the GPU. This buffer contains a set of GPU instructions that the WDDM driver and GPU use. A compute packet is the largest independent unit of compute work that the WDDM scheduler can submit to the GPU. A compute packet contains dispatches (for example, calls to the
Gets a Microsoft DirectX Graphics Infrastructure (DXGI) 1.2 description of an adapter or video card. This description includes information about the granularity at which the graphics processing unit (GPU) can be preempted from performing its current task.
-A reference to a
Returns
Use the GetDesc2 method to get a DXGI 1.2 description of an adapter. To get a DXGI 1.1 description, use the
The Windows Display Driver Model (WDDM) scheduler can preempt the GPU's execution of application tasks. The granularity at which the GPU can be preempted from performing its current task in the WDDM 1.1 or earlier driver model is a direct memory access (DMA) buffer for graphics tasks or a compute packet for compute tasks. The GPU can switch between tasks only after it completes the currently executing unit of work, a DMA buffer or a compute packet.
A DMA buffer is the largest independent unit of graphics work that the WDDM scheduler can submit to the GPU. This buffer contains a set of GPU instructions that the WDDM driver and GPU use. A compute packet is the largest independent unit of compute work that the WDDM scheduler can submit to the GPU. A compute packet contains dispatches (for example, calls to the
This interface adds some memory residency methods, for budgeting and reserving physical memory.
-For more details, refer to the Residency section of the D3D12 documentation.
-Registers to receive notification of hardware content protection teardown events.
-A handle to the event object that the operating system sets when hardware content protection teardown occurs. The CreateEvent or OpenEvent function returns this handle.
A reference to a key value that an application can pass to the
Call
Unregisters an event to stop it from receiving notification of hardware content protection teardown events.
-A key value for the window or event to unregister. The
This method informs the process of the current budget and process usage.
-Specifies the device's physical adapter for which the video memory information is queried. For single-GPU operation, set this to zero. If there are multiple GPU nodes, set this to the index of the node (the device's physical adapter) for which the video memory information is queried. See Multi-Adapter.
Specifies a
Fills in a
Applications must explicitly manage their physical memory usage and keep it within the budget assigned to the application process. Processes that cannot keep their usage within their assigned budgets will likely experience stuttering, as they are intermittently frozen and paged out to allow other processes to run.
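The budgeting discipline described above can be sketched with a small helper. The struct here is an illustrative mirror of the budget/usage pair the real API reports (via a DXGI video memory info structure); all names are hypothetical:

```cpp
#include <cstdint>

// Illustrative mirror of the budget/usage pair returned when querying
// video memory information for a memory segment group.
struct VideoMemoryInfo
{
    std::uint64_t budget;        // bytes the app may use without penalty
    std::uint64_t currentUsage;  // bytes the app currently has allocated
};

// Returns how many bytes the application should release to get back
// under budget, or 0 if it is already within budget.
std::uint64_t BytesOverBudget(const VideoMemoryInfo& info)
{
    return (info.currentUsage > info.budget)
               ? info.currentUsage - info.budget
               : 0;
}
```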
-This method sends the minimum required physical memory for an application, to the OS.
-Specifies the device's physical adapter for which the video memory information is being set. For single-GPU operation, set this to zero. If there are multiple GPU nodes, set this to the index of the node (the device's physical adapter) for which the video memory information is being set. See Multi-Adapter.
Specifies a
Specifies a UINT64 that sets the minimum required physical memory, in bytes.
Returns
Applications are encouraged to set a video reservation to denote the amount of physical memory they cannot go without. This value helps the OS quickly minimize the impact of large memory pressure situations.
-This method establishes a correlation between a CPU synchronization object and the budget change event.
-Specifies a HANDLE for the event.
A key value for the window or event to unregister. The
Instead of calling QueryVideoMemoryInfo regularly, applications can use CPU synchronization objects to efficiently wake threads when budget changes occur.
-This method stops notifying a CPU synchronization object whenever a budget change occurs. An application may switch back to polling the information regularly.
-A key value for the window or event to unregister. The
An application may switch back to polling for the information regularly.
- The
A display subsystem is often referred to as a video card; however, on some machines, the display subsystem is part of the motherboard.
To enumerate the display subsystems, use
To get an interface to the adapter for a particular device, use
To create a software adapter, use
Windows Phone 8: This API is supported.
-Represents a swap chain that is used by desktop media apps to decode video data and show it on a DirectComposition surface.
-Decode swap chains are intended for use primarily with YUV surface formats. When using decode buffers created with an RGB surface format, the TargetRect and DestSize must be set equal to the buffer dimensions. SourceRect cannot exceed the buffer dimensions.
In clone mode, the decode swap chain is only guaranteed to be shown on the primary output.
Decode swap chains cannot be used with dirty rects.
-Gets or sets the source region that is used for the swap chain.
-Gets or sets the rectangle that defines the target region for the video processing blit operation.
-Gets or sets the color space used by the swap chain.
-Presents a frame on the output adapter. The frame is a subresource of the
This method returns
Sets the rectangle that defines the source region for the video processing blit operation.
The source rectangle is the portion of the input surface that is blitted to the destination surface. The source rectangle is given in pixel coordinates, relative to the input surface.
-A reference to a
This method returns
Sets the rectangle that defines the target region for the video processing blit operation.
The target rectangle is the area within the destination surface where the output will be drawn. The target rectangle is given in pixel coordinates, relative to the destination surface.
-A reference to a
This method returns
Sets the size of the destination surface to use for the video processing blit operation.
The destination rectangle is the portion of the output surface that receives the blit for this stream. The destination rectangle is given in pixel coordinates, relative to the output surface.
-The width of the destination size, in pixels.
The height of the destination size, in pixels.
This method returns
Gets the source region that is used for the swap chain.
-A reference to a
This method returns
Gets the rectangle that defines the target region for the video processing blit operation.
-A reference to a
This method returns
Gets the size of the destination surface to use for the video processing blit operation.
-A reference to a variable that receives the width in pixels.
A reference to a variable that receives the height in pixels.
This method returns
Sets the color space used by the swap chain.
-A reference to a combination of
This method returns
Gets the color space used by the swap chain.
-A combination of
An
This interface is not supported by Direct3D 12 devices. Direct3D 12 applications have direct control over their swapchain management, so better latency control should be handled by the application. You can make use of Waitable objects (refer to
This interface is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
```cpp
IDXGIDevice1 * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice1), (void **)&pDXGIDevice);
```
Windows Phone 8: This API is supported.
-Gets or sets the number of frames that the system is allowed to queue for rendering.
-This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Frame latency is the number of frames that are allowed to be stored in a queue before submission for rendering. Latency is often used to control how the CPU chooses between responding to user input and frames that are in the render queue. It is often beneficial for applications that have no user input (for example, video playback) to queue more than 3 frames of data.
-Sets the number of frames that the system is allowed to queue for rendering.
-The maximum number of back buffer frames that a driver can queue. The value defaults to 3, but can range from 1 to 16. A value of 0 will reset latency to the default. For multi-head devices, this value is specified per-head.
Returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Frame latency is the number of frames that are allowed to be stored in a queue before submission for rendering. Latency is often used to control how the CPU chooses between responding to user input and frames that are in the render queue. It is often beneficial for applications that have no user input (for example, video playback) to queue more than 3 frames of data.
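The parameter rules above (default of 3, valid range 1 to 16, 0 meaning "reset to default") can be sketched as a small validation helper. This illustrates the documented rules only; it is not the actual runtime implementation, and the function name is hypothetical:

```cpp
// Resolves a requested maximum frame latency following the documented rules:
// 0 resets to the default of 3; values from 1 to 16 are used as-is;
// anything larger is rejected (the real API would return a failure HRESULT).
bool ResolveMaxFrameLatency(unsigned requested, unsigned* resolved)
{
    const unsigned kDefaultLatency = 3;
    const unsigned kMaxLatency = 16;
    if (requested > kMaxLatency)
        return false;  // invalid argument
    *resolved = (requested == 0) ? kDefaultLatency : requested;
    return true;
}
```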
-Gets the number of frames that the system is allowed to queue for rendering.
-This value is set to the number of frames that can be queued for render. This value defaults to 3, but can range from 1 to 16.
Returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Frame latency is the number of frames that are allowed to be stored in a queue before submission for rendering. Latency is often used to control how the CPU chooses between responding to user input and frames that are in the render queue. It is often beneficial for applications that have no user input (for example, video playback) to queue more than 3 frames of data.
- The
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
```cpp
IDXGIDevice2 * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice2), (void **)&pDXGIDevice);
```
Windows Phone 8: This API is supported.
-Allows the operating system to free the video memory of resources by discarding their content.
-The number of resources in the ppResources argument array.
An array of references to
A
OfferResources returns:
The priority value that the Priority parameter specifies describes how valuable the caller considers the content to be. The operating system uses the priority value to discard resources in order of priority. The operating system discards a resource that is offered with low priority before it discards a resource that is offered with a higher priority.
If you call OfferResources to offer a resource while the resource is bound to the pipeline, the resource is unbound. You cannot call OfferResources on a resource that is mapped. After you offer a resource, the resource cannot be mapped or bound to the pipeline until you call the IDXGIDevice2::ReclaimResource method to reclaim the resource. You cannot call OfferResources to offer immutable resources.
To offer shared resources, call OfferResources on only one of the sharing devices. To ensure exclusive access to the resources, you must use an
Platform Update for Windows 7: The runtime validates that OfferResources is used correctly on non-shared resources but doesn't perform the intended functionality. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
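The offer/reclaim cycle described above can be sketched as follows (assuming an existing `IDXGIDevice2* pDXGIDevice2` and an unbound, unmapped `IDXGIResource* pResource`; error handling abbreviated, and this is Windows-only code that cannot run in a portable environment):

```cpp
// Sketch: offer an idle resource's video memory to the OS, then
// reclaim it before the resource is mapped or bound again.
IDXGIResource* resources[] = { pResource };
HRESULT hr = pDXGIDevice2->OfferResources(
    1, resources, DXGI_OFFER_RESOURCE_PRIORITY_LOW);

// ... the app goes idle; the OS may discard the content ...

BOOL discarded = FALSE;
hr = pDXGIDevice2->ReclaimResources(1, resources, &discarded);
if (SUCCEEDED(hr) && discarded)
{
    // The content was discarded while offered; the app must
    // regenerate it before using the resource.
}
```

Low-priority offers are discarded before higher-priority ones, so priority should reflect how expensive the content is to regenerate.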
-Allows the operating system to free the video memory of resources by discarding their content.
-The number of resources in the ppResources argument array.
An array of references to
A
OfferResources returns:
The priority value that the Priority parameter specifies describes how valuable the caller considers the content to be. The operating system uses the priority value to discard resources in order of priority. The operating system discards a resource that is offered with low priority before it discards a resource that is offered with a higher priority.
If you call OfferResources to offer a resource while the resource is bound to the pipeline, the resource is unbound. You cannot call OfferResources on a resource that is mapped. After you offer a resource, the resource cannot be mapped or bound to the pipeline until you call the IDXGIDevice2::ReclaimResource method to reclaim the resource. You cannot call OfferResources to offer immutable resources.
To offer shared resources, call OfferResources on only one of the sharing devices. To ensure exclusive access to the resources, you must use an
Platform Update for Windows 7: The runtime validates that OfferResources is used correctly on non-shared resources but doesn't perform the intended functionality. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Allows the operating system to free the video memory of resources by discarding their content.
-The number of resources in the ppResources argument array.
An array of references to
A
OfferResources returns:
The priority value that the Priority parameter specifies describes how valuable the caller considers the content to be. The operating system uses the priority value to discard resources in order of priority. The operating system discards a resource that is offered with low priority before it discards a resource that is offered with a higher priority.
If you call OfferResources to offer a resource while the resource is bound to the pipeline, the resource is unbound. You cannot call OfferResources on a resource that is mapped. After you offer a resource, the resource cannot be mapped or bound to the pipeline until you call the IDXGIDevice2::ReclaimResource method to reclaim the resource. You cannot call OfferResources to offer immutable resources.
To offer shared resources, call OfferResources on only one of the sharing devices. To ensure exclusive access to the resources, you must use an
Platform Update for Windows 7: The runtime validates that OfferResources is used correctly on non-shared resources but doesn't perform the intended functionality. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Restores access to resources that were previously offered by calling
ReclaimResources returns:
After you call
To reclaim shared resources, call ReclaimResources only on one of the sharing devices. To ensure exclusive access to the resources, you must use an
Platform Update for Windows 7: The runtime validates that ReclaimResources is used correctly on non-shared resources but doesn't perform the intended functionality. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Restores access to resources that were previously offered by calling
ReclaimResources returns:
After you call
To reclaim shared resources, call ReclaimResources only on one of the sharing devices. To ensure exclusive access to the resources, you must use an
Platform Update for Windows 7: The runtime validates that ReclaimResources is used correctly on non-shared resources but doesn't perform the intended functionality. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Restores access to resources that were previously offered by calling
ReclaimResources returns:
After you call
To reclaim shared resources, call ReclaimResources only on one of the sharing devices. To ensure exclusive access to the resources, you must use an
Platform Update for Windows 7: The runtime validates that ReclaimResources is used correctly on non-shared resources but doesn't perform the intended functionality. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Flushes any outstanding rendering commands and sets the specified event object to the signaled state after all previously submitted rendering commands complete.
-A handle to the event object. The CreateEvent or OpenEvent function returns this handle. All types of event objects (manual-reset, auto-reset, and so on) are supported.
The handle must have the EVENT_MODIFY_STATE access right. For more information about access rights, see Synchronization Object Security and Access Rights.
Returns
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, EnqueueSetEvent fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
EnqueueSetEvent calls the SetEvent function on the event object after all previously submitted rendering commands complete or the device is removed.
After an application calls EnqueueSetEvent, it can immediately call the WaitForSingleObject function to put itself to sleep until rendering commands complete.
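The sleep-until-complete pattern just described can be sketched as follows (assuming an existing `IDXGIDevice2* pDXGIDevice2`; error handling abbreviated, Windows-only):

```cpp
// Sketch: block the calling thread until all previously submitted
// rendering commands have completed on the GPU.
HANDLE hEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr); // auto-reset event
HRESULT hr = pDXGIDevice2->EnqueueSetEvent(hEvent); // signaled on completion
if (SUCCEEDED(hr))
{
    WaitForSingleObject(hEvent, INFINITE); // sleep until the GPU catches up
}
CloseHandle(hEvent);
```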
You cannot use EnqueueSetEvent to determine work completion that is associated with presentation (
The
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
IDXGIDevice * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
Windows Phone 8: This API is supported.
-Trims the graphics memory allocated by the
For apps that render with DirectX, graphics drivers periodically allocate internal memory buffers in order to speed up subsequent rendering requests. These memory allocations count against the app's memory usage for PLM and in general lead to increased memory usage by the overall system.
Starting in Windows?8.1, apps that render with Direct2D and/or Direct3D (including CoreWindow and XAML interop) must call Trim in response to the PLM suspend callback. The Direct3D runtime and the graphics driver will discard internal memory buffers allocated for the app, reducing its memory footprint.
Calling this method does not change the rendering state of the graphics device and it has no effect on rendering operations. There is a brief performance hit when internal buffers are reallocated during the first rendering operations after the Trim call; therefore, apps should call Trim only when going idle for a period of time (in response to PLM suspend, for example).
Apps should ensure that they call Trim as one of the last D3D operations done before going idle. Direct3D will normally defer the destruction of D3D objects. Calling Trim, however, forces Direct3D to destroy objects immediately. For this reason, it is not guaranteed that releasing the final reference on Direct3D objects after calling Trim will cause the object to be destroyed and memory to be deallocated before the app suspends.
Similar to
It is also prudent to release references on middleware before calling Trim, as that middleware may also need to release references to Direct3D objects.
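A minimal sketch of the suspend-time pattern described above (assuming an existing `Microsoft::WRL::ComPtr<ID3D11Device> m_d3dDevice` and a hypothetical suspend handler; error handling abbreviated, Windows-only):

```cpp
#include <dxgi1_3.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: call Trim as one of the last D3D operations before going idle,
// typically from the app's PLM suspend callback.
void OnSuspending(ComPtr<ID3D11Device> const& m_d3dDevice)
{
    ComPtr<IDXGIDevice3> dxgiDevice;
    if (SUCCEEDED(m_d3dDevice.As(&dxgiDevice)))
    {
        dxgiDevice->Trim(); // driver discards internal buffers for this app
    }
}
```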
- An
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
IDXGIDevice * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
Windows Phone 8: This API is supported.
-Allows the operating system to free the video memory of resources, including both discarding the content and de-committing the memory.
-The number of resources in the ppResources argument array.
An array of references to
A
Specifies the
This method returns an
OfferResources1 (an extension of the original
OfferResources1 and ReclaimResources1 may not be used interchangeably with OfferResources and ReclaimResources.
The priority value that the Priority parameter specifies describes how valuable the caller considers the content to be. The operating system uses the priority value to discard resources in order of priority. The operating system discards a resource that is offered with low priority before it discards a resource that is offered with a higher priority.
If you call OfferResources1 to offer a resource while the resource is bound to the pipeline, the resource is unbound. You cannot call OfferResources1 on a resource that is mapped. After you offer a resource, the resource cannot be mapped or bound to the pipeline until you call the ReclaimResources1 method to reclaim the resource. You cannot call OfferResources1 to offer immutable resources.
To offer shared resources, call OfferResources1 on only one of the sharing devices. To ensure exclusive access to the resources, you must use an
The user mode display driver might not immediately offer the resources that you specified in a call to OfferResources1. The driver can postpone offering them until the next call to
Allows the operating system to free the video memory of resources, including both discarding the content and de-committing the memory.
-The number of resources in the ppResources argument array.
An array of references to
A
Specifies the
This method returns an
OfferResources1 (an extension of the original
OfferResources1 and ReclaimResources1 may not be used interchangeably with OfferResources and ReclaimResources.
The priority value that the Priority parameter specifies describes how valuable the caller considers the content to be. The operating system uses the priority value to discard resources in order of priority. The operating system discards a resource that is offered with low priority before it discards a resource that is offered with a higher priority.
If you call OfferResources1 to offer a resource while the resource is bound to the pipeline, the resource is unbound. You cannot call OfferResources1 on a resource that is mapped. After you offer a resource, the resource cannot be mapped or bound to the pipeline until you call the ReclaimResources1 method to reclaim the resource. You cannot call OfferResources1 to offer immutable resources.
To offer shared resources, call OfferResources1 on only one of the sharing devices. To ensure exclusive access to the resources, you must use an
The user mode display driver might not immediately offer the resources that you specified in a call to OfferResources1. The driver can postpone offering them until the next call to
Allows the operating system to free the video memory of resources, including both discarding the content and de-committing the memory.
-The number of resources in the ppResources argument array.
An array of references to
A
Specifies the
This method returns an
OfferResources1 (an extension of the original
OfferResources1 and ReclaimResources1 may not be used interchangeably with OfferResources and ReclaimResources.
The priority value that the Priority parameter specifies describes how valuable the caller considers the content to be. The operating system uses the priority value to discard resources in order of priority. The operating system discards a resource that is offered with low priority before it discards a resource that is offered with a higher priority.
If you call OfferResources1 to offer a resource while the resource is bound to the pipeline, the resource is unbound. You cannot call OfferResources1 on a resource that is mapped. After you offer a resource, the resource cannot be mapped or bound to the pipeline until you call the ReclaimResources1 method to reclaim the resource. You cannot call OfferResources1 to offer immutable resources.
To offer shared resources, call OfferResources1 on only one of the sharing devices. To ensure exclusive access to the resources, you must use an
The user mode display driver might not immediately offer the resources that you specified in a call to OfferResources1. The driver can postpone offering them until the next call to
Restores access to resources that were previously offered by calling
This method returns an
After you call OfferResources1 to offer one or more resources, you must call ReclaimResources1 before you can use those resources again.
To reclaim shared resources, call ReclaimResources1 only on one of the sharing devices. To ensure exclusive access to the resources, you must use an
Restores access to resources that were previously offered by calling
This method returns an
After you call OfferResources1 to offer one or more resources, you must call ReclaimResources1 before you can use those resources again.
To reclaim shared resources, call ReclaimResources1 only on one of the sharing devices. To ensure exclusive access to the resources, you must use an
Restores access to resources that were previously offered by calling
This method returns an
After you call OfferResources1 to offer one or more resources, you must call ReclaimResources1 before you can use those resources again.
To reclaim shared resources, call ReclaimResources1 only on one of the sharing devices. To ensure exclusive access to the resources, you must use an
The
We recommend that you not use
Call QueryInterface from a factory object (
IDXGIDisplayControl * pDXGIDisplayControl;
hr = g_pDXGIFactory->QueryInterface(__uuidof(IDXGIDisplayControl), (void **)&pDXGIDisplayControl);
The operating system processes changes to stereo-enabled configuration asynchronously. Therefore, these changes might not be immediately visible in every process that calls
Platform Update for Windows 7: Stereoscopic 3D display behavior isn't available with the Platform Update for Windows 7. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Retrieves a Boolean value that indicates whether the operating system's stereoscopic 3D display behavior is enabled.
-You pass a Boolean value to the
Set a Boolean value to either enable or disable the operating system's stereoscopic 3D display behavior.
-Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, SetStereoEnabled doesn't change stereoscopic 3D display behavior because stereoscopic 3D display behavior isn't available with the Platform Update for Windows 7. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Retrieves a Boolean value that indicates whether the operating system's stereoscopic 3D display behavior is enabled.
-IsStereoEnabled returns TRUE when the operating system's stereoscopic 3D display behavior is enabled and
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, IsStereoEnabled always returns
You pass a Boolean value to the
Set a Boolean value to either enable or disable the operating system's stereoscopic 3D display behavior.
-A Boolean value that either enables or disables the operating system's stereoscopic 3D display behavior. TRUE enables the operating system's stereoscopic 3D display behavior and
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, SetStereoEnabled doesn't change stereoscopic 3D display behavior because stereoscopic 3D display behavior isn't available with the Platform Update for Windows 7. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Enables creating Microsoft DirectX Graphics Infrastructure (DXGI) objects.
-Gets the flags that were used when a Microsoft DirectX Graphics Infrastructure (DXGI) object was created.
-The GetCreationFlags method returns flags that were passed to the CreateDXGIFactory2 function, or were implicitly constructed by CreateDXGIFactory, CreateDXGIFactory1,
Gets the flags that were used when a Microsoft DirectX Graphics Infrastructure (DXGI) object was created.
-The creation flags.
The GetCreationFlags method returns flags that were passed to the CreateDXGIFactory2 function, or were implicitly constructed by CreateDXGIFactory, CreateDXGIFactory1,
This interface enables a single method to support variable refresh rate displays.
-Used to check for hardware feature support.
-Specifies one member of
Specifies a reference to a buffer that will be filled with data that describes the feature support.
The size, in bytes, of pFeatureSupportData.
This method returns an
Refer to the description of
Creates swap chains for desktop media apps that use DirectComposition surfaces to decode and display video.
- To create a Microsoft DirectX Graphics Infrastructure (DXGI) media factory interface, pass
Because you can create a Direct3D device without creating a swap chain, you might need to retrieve the factory that is used to create the device in order to create a swap chain. You can request the
-IDXGIDevice * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
IDXGIAdapter * pDXGIAdapter;
hr = pDXGIDevice->GetParent(__uuidof(IDXGIAdapter), (void **)&pDXGIAdapter);
IDXGIFactory * pIDXGIFactory;
pDXGIAdapter->GetParent(__uuidof(IDXGIFactory), (void **)&pIDXGIFactory);
Creates a YUV swap chain for an existing DirectComposition surface handle.
-CreateSwapChainForCompositionSurfaceHandle returns:
Creates a YUV swap chain for an existing DirectComposition surface handle. The swap chain is created with pre-existing buffers and very few descriptive elements are required. Instead, this method requires a DirectComposition surface handle and an
CreateDecodeSwapChainForCompositionSurfaceHandle returns:
The
Enables performing bulk operations across all SurfaceImageSource objects created in the same process.
-Flushes all current GPU work for all SurfaceImageSource or VirtualSurfaceImageSource objects associated with the given device.
-If this method succeeds, it returns
The FlushAllSurfacesWithDevice method flushes current GPU work for all SurfaceImageSource objects that were created with device. This GPU work includes Direct2D rendering work and internal GPU work done by the framework associated with rendering. This is useful if an application has created multiple SurfaceImageSource objects and needs to flush the GPU work for all of these surfaces from the background rendering thread. By flushing this work from the background thread the work can be better parallelized, with work being done on the UI thread to improve performance.
You can call the FlushAllSurfacesWithDevice method from a non-UI thread.
-Provides the implementation of a shared fixed-size surface for Direct2D drawing.
Note: If the surface is larger than the screen size, use VirtualSurfaceImageSource instead. This interface provides the native implementation of the SurfaceImageSource Windows Runtime type. To obtain a reference to
Microsoft::WRL::ComPtr<ISurfaceImageSourceNative> m_sisNative;
// ...
IInspectable* sisInspectable = reinterpret_cast<IInspectable*>(surfaceImageSource);
sisInspectable->QueryInterface(__uuidof(ISurfaceImageSourceNative), (void **)&m_sisNative);
Sets the DXGI device, created with
Sets the DXGI device, created with
Pointer to the DXGI device interface.
If this method succeeds, it returns
Opens the supplied DXGI surface for drawing.
-The region of the surface that will be drawn into.
Receives the point (x,y) offset of the surface that will be drawn into.
Receives a reference to the surface for drawing.
If the app window that contains the SurfaceImageSource isn't active, like when it's suspended, calling the BeginDraw method returns an error.
-Closes the surface draw operation.
-If this method succeeds, it returns
Provides the implementation of a shared Microsoft DirectX surface which is displayed in a SurfaceImageSource or VirtualSurfaceImageSource.
-The
Microsoft::WRL::ComPtr<ISurfaceImageSourceNativeWithD2D> m_sisD2DNative;
// ...
IInspectable* sisInspectable = reinterpret_cast<IInspectable*>(surfaceImageSource);
sisInspectable->QueryInterface(__uuidof(ISurfaceImageSourceNativeWithD2D), (void **)&m_sisD2DNative);
The
The
Only call the SetDevice, BeginDraw, and EndDraw methods on
In order to support batching updates to multiple surfaces to improve performance, you can pass an
To draw to the surface from a background thread, you must set any DirectX resources, including the Microsoft Direct3D device, Direct3D device context, Direct2D device, and Direct2D device context, to enable multithreading support.
You can call the BeginDraw, SuspendDraw, and ResumeDraw methods from any background thread to enable high-performance multithreaded drawing.
Always call the EndDraw method on the UI thread in order to synchronize updating the DirectX content with the current XAML UI thread frame. You can call BeginDraw on a background thread, call SuspendDraw when you're done drawing on the background thread, and call EndDraw on the UI thread.
Use SuspendDraw and ResumeDraw to suspend and resume drawing on any background or UI thread.
Handle the SurfaceContentsLost event to determine when you need to recreate content which may be lost if the system resets the GPU.
-Sets the Microsoft DirectX Graphics Infrastructure (DXGI) or Direct2D device, created with
Sets the Microsoft DirectX Graphics Infrastructure (DXGI) or Direct2D device, created with
Pointer to the DXGI device interface. You can pass an
This method fails when the SurfaceImageSource is larger than the maximum texture size supported by the Direct3D device. Apps should use VirtualSurfaceImageSource for surfaces larger than the maximum texture size supported by the Direct3D device.
Initiates an update to the associated SurfaceImageSource or VirtualSurfaceImageSource.
-If this method succeeds, it returns
Closes the surface draw operation.
-If this method succeeds, it returns
Always call the EndDraw method on the UI thread in order to synchronize updating the Microsoft DirectX content with the current XAML UI thread frame.
-Suspends the drawing operation.
-If this method succeeds, it returns
Resume the drawing operation.
-If this method succeeds, it returns
Sets the DirectX swap chain for SwapChainBackgroundPanel.
-Sets the DirectX swap chain for SwapChainBackgroundPanel.
-If this method succeeds, it returns
Provides interoperation between XAML and a DirectX swap chain. Unlike SwapChainBackgroundPanel, a SwapChainPanel can appear at any level in the XAML display tree, and more than one can be present in any given tree.
-This interface provides the native implementation of the Windows::UI::XAML::Control::SwapChainPanel Windows Runtime type. To obtain a reference to
Microsoft::WRL::ComPtr<ISwapChainPanelNative> m_swapChainNative;
// ...
IInspectable* panelInspectable = reinterpret_cast<IInspectable*>(swapChainPanel);
panelInspectable->QueryInterface(__uuidof(ISwapChainPanelNative), (void **)&m_swapChainNative);
Sets the DirectX swap chain for SwapChainPanel.
-Sets the DirectX swap chain for SwapChainPanel.
-If this method succeeds, it returns
Provides interoperation between XAML and a DirectX swap chain. Unlike SwapChainBackgroundPanel, a SwapChainPanel can appear at any level in the XAML display tree, and more than one can be present in any given tree.
-This interface provides the native implementation of the Windows::UI::XAML::Control::SwapChainPanel Windows Runtime type. To obtain a reference to
Microsoft::WRL::ComPtr<ISwapChainPanelNative2> m_swapChainNative2;
// ...
IInspectable* panelInspectable = reinterpret_cast<IInspectable*>(swapChainPanel);
panelInspectable->QueryInterface(__uuidof(ISwapChainPanelNative2), (void **)&m_swapChainNative2);
Sets the DirectX swap chain for SwapChainPanel using a handle to the swap chain.
-SetSwapChain(HANDLE swapChainHandle) allows a swap chain to be rendered by referencing a shared handle to the swap chain. This enables scenarios where a swap chain is created in one process and needs to be passed to another process.
XAML supports setting a DXGI swap chain as the content of a SwapChainPanel element. Apps accomplish this by querying for the
This process works for references to in-process swap chains. However, this doesn't work for VoIP apps, which use a two-process model to enable continuing calls on a background process when a foreground process is suspended or shut down. This two-process implementation requires the ability to pass a shared handle to a swap chain, rather than a reference, created on the background process to the foreground process to be rendered in a XAML SwapChainPanel in the foreground app.
<!-- XAML markup -->
<Page>
    <SwapChainPanel x:Name="captureStreamDisplayPanel" />
</Page>

// Definitions
ComPtr<IDXGISwapChain1> m_swapChain;
HANDLE m_swapChainHandle;
ComPtr<ID3D11Device> m_d3dDevice;
ComPtr<IDXGIAdapter> dxgiAdapter;
ComPtr<IDXGIFactory2> dxgiFactory;
ComPtr<IDXGIFactoryMedia> dxgiFactoryMedia;
ComPtr<IDXGIDevice> dxgiDevice;
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};

// Get DXGI factory (assume standard boilerplate has created D3D11Device)
m_d3dDevice.As(&dxgiDevice);
dxgiDevice->GetAdapter(&dxgiAdapter);
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), &dxgiFactory);

// Create swap chain and get handle
DCompositionCreateSurfaceHandle(GENERIC_ALL, nullptr, &m_swapChainHandle);
dxgiFactory.As(&dxgiFactoryMedia);
dxgiFactoryMedia->CreateSwapChainForCompositionSurfaceHandle(
    m_d3dDevice.Get(), m_swapChainHandle, &swapChainDesc, nullptr, &m_swapChain);

// Set swap chain to display in a SwapChainPanel
ComPtr<ISwapChainPanelNative2> panelNative;
reinterpret_cast<IUnknown*>(captureStreamDisplayPanel)->QueryInterface(IID_PPV_ARGS(&panelNative));
panelNative->SetSwapChainHandle(m_swapChainHandle);
Sets the DirectX swap chain for SwapChainPanel using a handle to the swap chain.
-If this method succeeds, it returns
SetSwapChain(HANDLE swapChainHandle) allows a swap chain to be rendered by referencing a shared handle to the swap chain. This enables scenarios where a swap chain is created in one process and needs to be passed to another process.
XAML supports setting a DXGI swap chain as the content of a SwapChainPanel element. Apps accomplish this by querying for the
This process works for references to in-process swap chains. However, this doesn't work for VoIP apps, which use a two-process model to enable continuing calls on a background process when a foreground process is suspended or shut down. This two-process implementation requires the ability to pass a shared handle to a swap chain, rather than a reference, created on the background process to the foreground process to be rendered in a XAML SwapChainPanel in the foreground app.
<!-- XAML markup -->
<Page>
    <SwapChainPanel x:Name="captureStreamDisplayPanel" />
</Page>

// Definitions
ComPtr<IDXGISwapChain1> m_swapChain;
HANDLE m_swapChainHandle;
ComPtr<ID3D11Device> m_d3dDevice;
ComPtr<IDXGIAdapter> dxgiAdapter;
ComPtr<IDXGIFactory2> dxgiFactory;
ComPtr<IDXGIFactoryMedia> dxgiFactoryMedia;
ComPtr<IDXGIDevice> dxgiDevice;
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};

// Get DXGI factory (assume standard boilerplate has created D3D11Device)
m_d3dDevice.As(&dxgiDevice);
dxgiDevice->GetAdapter(&dxgiAdapter);
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), &dxgiFactory);

// Create swap chain and get handle
DCompositionCreateSurfaceHandle(GENERIC_ALL, nullptr, &m_swapChainHandle);
dxgiFactory.As(&dxgiFactoryMedia);
dxgiFactoryMedia->CreateSwapChainForCompositionSurfaceHandle(
    m_d3dDevice.Get(), m_swapChainHandle, &swapChainDesc, nullptr, &m_swapChain);

// Set swap chain to display in a SwapChainPanel
ComPtr<ISwapChainPanelNative2> panelNative;
reinterpret_cast<IUnknown*>(captureStreamDisplayPanel)->QueryInterface(IID_PPV_ARGS(&panelNative));
panelNative->SetSwapChainHandle(m_swapChainHandle);
Provides an interface for the implementation of drawing behaviors when a VirtualSurfaceImageSource requests an update.
-This interface is implemented by the developer to provide specific drawing behaviors for updates to a VirtualSurfaceImageSource. Classes that implement this interface are provided to the
Gets the boundaries of the visible region of the shared surface.
-Invalidates a specific region of the shared surface for drawing.
-The region of the surface to invalidate.
If this method succeeds, it returns
Gets the total number of regions of the surface that must be updated.
-Receives the number of regions to update.
Gets the set of regions that must be updated on the shared surface.
-The number of regions that must be updated. You obtain this by calling GetUpdateRectCount.
Receives a list of regions that must be updated.
If this method succeeds, it returns
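The count-then-fetch pattern of GetUpdateRectCount and GetUpdateRects can be sketched as follows (assuming an existing `IVirtualSurfaceImageSourceNative* m_vsisNative`; error handling abbreviated, Windows-only):

```cpp
#include <windows.h>
#include <vector>

// Sketch: enumerate the regions of a VirtualSurfaceImageSource that
// must be redrawn, then handle each one.
DWORD rectCount = 0;
m_vsisNative->GetUpdateRectCount(&rectCount);

std::vector<RECT> updateRects(rectCount);
m_vsisNative->GetUpdateRects(updateRects.data(), rectCount);

for (const RECT& rc : updateRects)
{
    // Open the region with BeginDraw, render into it, then EndDraw.
}
```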
Gets the boundaries of the visible region of the shared surface.
-Receives a rectangle that specifies the visible region of the shared surface.
If this method succeeds, it returns
Registers for the callback that will perform the drawing when an update to the shared surface is requested.
-Pointer to an implementation of
If this method succeeds, it returns
Resizes the surface.
-The updated width of the surface.
The updated height of the surface.
If this method succeeds, it returns
Performs the drawing behaviors when an update to VirtualSurfaceImageSource is requested.
-This method is implemented by the developer.
-Performs the drawing behaviors when an update to VirtualSurfaceImageSource is requested.
-This method is implemented by the developer.
-Performs the drawing behaviors when an update to VirtualSurfaceImageSource is requested.
-If this method succeeds, it returns
This method is implemented by the developer.
-Represents a keyed mutex, which allows exclusive access to a shared resource that is used by multiple devices.
-The
An
For information about creating a keyed mutex, see the
Using a key, acquires exclusive rendering access to a shared resource.
-A value that indicates which device to give access to. This method will succeed when the device that currently owns the surface calls the
The time-out interval, in milliseconds. This method will return if the interval elapses, and the keyed mutex has not been released using the specified Key. If this value is set to zero, the AcquireSync method will test to see if the keyed mutex has been released and returns immediately. If this value is set to INFINITE, the time-out interval will never elapse.
Return
If the owning device attempted to create another keyed mutex on the same shared resource, AcquireSync returns E_FAIL.
AcquireSync can also return the following DWORD constants. Therefore, you should explicitly check for these constants. If you only use the SUCCEEDED macro on the return value to determine if AcquireSync succeeded, you will not catch these constants.
The AcquireSync method creates a lock to a surface that is shared between multiple devices, allowing only one device to render to a surface at a time. This method uses a key to determine which device currently has exclusive access to the surface.
When a surface is created using the D3D10_RESOURCE_MISC_SHARED_KEYEDMUTEX value of the D3D10_RESOURCE_MISC_FLAG enumeration, you must call the AcquireSync method before rendering to the surface. You must call the ReleaseSync method when you are done rendering to a surface.
To acquire a reference to the keyed mutex object of a shared resource, call the QueryInterface method of the resource and pass in the UUID of the
The AcquireSync method uses the key as follows, depending on the state of the surface:
Using a key, releases exclusive rendering access to a shared resource.
-A value that indicates which device to give access to. This method succeeds when the device that currently owns the surface calls the ReleaseSync method using the same value. This value can be any UINT64 value.
Returns
If the device attempted to release a keyed mutex that is not valid or owned by the device, ReleaseSync returns E_FAIL.
The ReleaseSync method releases a lock to a surface that is shared between multiple devices. This method uses a key to determine which device currently has exclusive access to the surface.
When a surface is created using the D3D10_RESOURCE_MISC_SHARED_KEYEDMUTEX value of the D3D10_RESOURCE_MISC_FLAG enumeration, you must call the
After you call the ReleaseSync method, the shared resource is unset from the rendering pipeline.
To acquire a reference to the keyed mutex object of a shared resource, call the QueryInterface method of the resource and pass in the UUID of the
An
To see the outputs available, use
Get a description of the output.
-On a high DPI desktop, GetDesc returns the visualized screen size unless the app is marked high DPI aware. For info about writing DPI-aware Win32 apps, see High DPI.
-Gets a description of the gamma-control capabilities.
-Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Gets or sets the gamma control settings.
-Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Gets statistics about recently rendered frames.
-This API is similar to
Note: Calling this method is only supported while in full-screen mode. -
Get a description of the output.
-A reference to the output description (see
Returns a code that indicates success or failure.
On a high DPI desktop, GetDesc returns the visualized screen size unless the app is marked high DPI aware. For info about writing DPI-aware Win32 apps, see High DPI.
-[Starting with Direct3D 11.1, we recommend not to use GetDisplayModeList anymore to retrieve the matching display mode. Instead, use
Gets the display modes that match the requested format and other input options.
-Returns one of the following DXGI_ERROR. It is rare, but possible, that the display modes available can change immediately after calling this method, in which case
In general, when switching from windowed to full-screen mode, a swap chain automatically chooses a display mode that meets (or exceeds) the resolution, color depth and refresh rate of the swap chain. To exercise more control over the display mode, use this API to poll the set of display modes that are validated against monitor capabilities, or all modes that match the desktop (if the desktop settings are not validated against the monitor).
As shown, this API is designed to be called twice. First to get the number of modes available, and second to return a description of the modes.
UINT num = 0;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_FLOAT;
UINT flags = DXGI_ENUM_MODES_INTERLACED;

pOutput->GetDisplayModeList( format, flags, &num, 0);

...

DXGI_MODE_DESC * pDescs = new DXGI_MODE_DESC[num];
pOutput->GetDisplayModeList( format, flags, &num, pDescs);
[Starting with Direct3D 11.1, we recommend not to use FindClosestMatchingMode anymore to find the display mode that most closely matches the requested display mode. Instead, use
Finds the display mode that most closely matches the requested display mode.
-Returns one of the following DXGI_ERROR.
FindClosestMatchingMode behaves similarly to the
Halt a thread until the next vertical blank occurs.
-Returns one of the following DXGI_ERROR.
A vertical blank occurs when the raster moves from the lower right corner to the upper left corner to begin drawing the next frame.
-Takes ownership of an output.
-A reference to the
Set to TRUE to enable other threads or applications to take ownership of the device; otherwise, set to
Returns one of the DXGI_ERROR values.
When you are finished with the output, call
TakeOwnership should not be called directly by applications, since results will be unpredictable. It is called implicitly by the DXGI swap chain object during full-screen transitions, and should not be used as a substitute for swap-chain methods.
-Releases ownership of the output.
-If you are not using a swap chain, get access to an output by calling
Gets a description of the gamma-control capabilities.
-A reference to a description of the gamma-control capabilities (see
Returns one of the DXGI_ERROR values.
Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Sets the gamma controls.
-A reference to a
Returns one of the DXGI_ERROR values.
Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Gets the gamma control settings.
-An array of gamma control settings (see
Returns one of the DXGI_ERROR values.
Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Changes the display mode.
-A reference to a surface (see
Returns one of the DXGI_ERROR values.
This method should only be called between
[Starting with Direct3D 11.1, we recommend not to use GetDisplaySurfaceData anymore to retrieve the current display surface. Instead, use
Gets a copy of the current display surface.
-Returns one of the DXGI_ERROR values.
Use
Gets statistics about recently rendered frames.
-A reference to frame statistics (see
If this function succeeds, it returns
This API is similar to
Note: Calling this method is only supported while in full-screen mode. -
UINT num = 0;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_FLOAT;
UINT flags = DXGI_ENUM_MODES_INTERLACED;

pOutput->GetDisplayModeList( format, flags, &num, 0);

...

DXGI_MODE_DESC * pDescs = new DXGI_MODE_DESC[num];
pOutput->GetDisplayModeList( format, flags, &num, pDescs);
- An
To determine the outputs that are available from the adapter, use
[Starting with Direct3D 11.1, we recommend not to use GetDisplayModeList anymore to retrieve the matching display mode. Instead, use
Gets the display modes that match the requested format and other input options.
-Returns one of the following DXGI_ERROR. It is rare, but possible, that the display modes available can change immediately after calling this method, in which case
In general, when switching from windowed to full-screen mode, a swap chain automatically chooses a display mode that meets (or exceeds) the resolution, color depth and refresh rate of the swap chain. To exercise more control over the display mode, use this API to poll the set of display modes that are validated against monitor capabilities, or all modes that match the desktop (if the desktop settings are not validated against the monitor).
As shown, this API is designed to be called twice. First to get the number of modes available, and second to return a description of the modes.
UINT num = 0;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_FLOAT;
UINT flags = DXGI_ENUM_MODES_INTERLACED;

pOutput->GetDisplayModeList( format, flags, &num, 0);

...

DXGI_MODE_DESC * pDescs = new DXGI_MODE_DESC[num];
pOutput->GetDisplayModeList( format, flags, &num, pDescs);
Finds the display mode that most closely matches the requested display mode.
-A reference to the
A reference to the
A reference to the Direct3D device interface. If this parameter is
Returns one of the error codes described in the DXGI_ERROR topic.
Direct3D devices require UNORM formats.
FindClosestMatchingMode1 finds the closest matching available display mode to the mode that you specify in pModeToMatch.
If you set the Stereo member in the
FindClosestMatchingMode1 resolves similarly ranked members of display modes (that is, all specified, or all unspecified, and so on) in the following order:
When FindClosestMatchingMode1 determines the closest value for a particular member, it uses previously matched members to filter the display mode list choices, and ignores other members. For example, when FindClosestMatchingMode1 matches Resolution, it already filtered the display mode list by a certain ScanlineOrdering, Scaling, and Format, while it ignores RefreshRate. This ordering doesn't define the absolute ordering for every usage scenario of FindClosestMatchingMode1, because the application can choose some values initially, which effectively changes the order of resolving members.
FindClosestMatchingMode1 matches members of the display mode one at a time, generally in a specified order.
If a member is unspecified, FindClosestMatchingMode1 gravitates toward the values for the desktop related to this output. If this output is not part of the desktop, FindClosestMatchingMode1 uses the default desktop output to find values. If an application uses a fully unspecified display mode, FindClosestMatchingMode1 typically returns a display mode that matches the desktop settings for this output. Because unspecified members are lower priority than specified members, FindClosestMatchingMode1 resolves unspecified members later than specified members.
-Copies the display surface (front buffer) to a user-provided resource.
-A reference to a resource interface that represents the resource to which GetDisplaySurfaceData1 copies the display surface.
Returns one of the error codes described in the DXGI_ERROR topic.
GetDisplaySurfaceData1 is similar to
GetDisplaySurfaceData1 returns an error if the input resource is not a 2D texture (represented by the
The original
You can call GetDisplaySurfaceData1 only when an output is in full-screen mode. If GetDisplaySurfaceData1 succeeds, it fills the destination resource.
Use
Creates a desktop duplication interface from the
If an application wants to duplicate the entire desktop, it must create a desktop duplication interface on each active output on the desktop. This interface does not provide an explicit way to synchronize the timing of each output image. Instead, the application must use the time stamp of each output, and then determine how to combine the images.
For DuplicateOutput to succeed, you must create pDevice from
If the current mode is a stereo mode, the desktop duplication interface provides the image for the left stereo image only.
By default, only four processes can use a
For improved performance, consider using DuplicateOutput1.
-[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Gets the display modes that match the requested format and other input options.
-A
A combination of DXGI_ENUM_MODES-typed values that are combined by using a bitwise OR operation. The resulting value specifies options for display modes to include. You must specify
GetDisplayModeList1 is updated from GetDisplayModeList to return a list of
The GetDisplayModeList1 method does not enumerate stereo modes unless you specify the
In general, when you switch from windowed to full-screen mode, a swap chain automatically chooses a display mode that meets (or exceeds) the resolution, color depth, and refresh rate of the swap chain. To exercise more control over the display mode, use GetDisplayModeList1 to poll the set of display modes that are validated against monitor capabilities, or all modes that match the desktop (if the desktop settings are not validated against the monitor).
The following example code shows that you need to call GetDisplayModeList1 twice. First call GetDisplayModeList1 to get the number of modes available, and second call GetDisplayModeList1 to return a description of the modes.
UINT num = 0;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_FLOAT;
UINT flags = DXGI_ENUM_MODES_INTERLACED;

pOutput->GetDisplayModeList1( format, flags, &num, 0);

...

DXGI_MODE_DESC1 * pDescs = new DXGI_MODE_DESC1[num];
pOutput->GetDisplayModeList1( format, flags, &num, pDescs);
- An
To see the outputs available, use
Queries an adapter output for multiplane overlay support. If this API returns TRUE, multiple swap chain composition takes place in a performant manner using overlay hardware. If this API returns FALSE, apps should avoid using foreground swap chains (that is, avoid using swap chains created with the
TRUE if the output adapter is the primary adapter and it supports multiplane overlays, otherwise returns
See CreateSwapChainForCoreWindow for info on creating a foreground swap chain.
-[This documentation is preliminary and is subject to change.]
Queries an adapter output for multiplane overlay support.
-TRUE if the output adapter is the primary adapter and it supports multiplane overlays, otherwise returns
Represents an adapter output (such as a monitor). The
Checks for overlay support.
-A
A reference to the Direct3D device interface. CheckOverlaySupport returns only support info about this scan-out device.
A reference to a variable that receives a combination of
Represents an adapter output (such as a monitor). The
Checks for overlay color space support.
-A
A
A reference to the Direct3D device interface. CheckOverlayColorSpaceSupport returns only support info about this scan-out device.
A reference to a variable that receives a combination of
An
To see the outputs available, use
Allows specifying a list of supported formats for fullscreen surfaces that can be returned by the
This method allows directly receiving the original back buffer format used by a running fullscreen application. For comparison, using the original DuplicateOutput function always converts the fullscreen surface to a 32-bit BGRA format. In cases where the current fullscreen application is using a different buffer format, a conversion to 32-bit BGRA incurs a performance penalty. Besides the performance benefit of being able to skip format conversion, using DuplicateOutput1 also allows receiving the full gamut of colors in cases where a high-color format (such as R10G10B10A2) is being presented.
The pSupportedFormats array should only contain display scan-out formats. See Format Support for Direct3D Feature Level 11.0 Hardware for required scan-out formats at each feature level. If the current fullscreen buffer format is not contained in the pSupportedFormats array, DXGI will pick one of the supplied formats and convert the fullscreen buffer to that format before returning from
An
To see the outputs available, use
The
A collaboration application can use
An application can use
The following components of the operating system can generate the desktop image:
All current
Examples of situations in which
In these situations, the application must release the
While the application processes each desktop image, the operating system accumulates all the desktop image updates into a single update. For more information about desktop updates, see Updating the desktop image data.
The desktop image is always in the
The
Retrieves a description of a duplicated output. This description specifies the dimensions of the surface that contains the desktop image.
-After an application creates an
Retrieves a description of a duplicated output. This description specifies the dimensions of the surface that contains the desktop image.
-A reference to a
After an application creates an
Indicates that the application is ready to process the next desktop image.
-The time-out interval, in milliseconds. This interval specifies the amount of time that this method waits for a new frame before it returns to the caller. This method returns if the interval elapses, and a new desktop image is not available.
For more information about the time-out interval, see Remarks.
A reference to a memory location that receives the
A reference to a variable that receives the
AcquireNextFrame returns:
When AcquireNextFrame returns successfully, the calling application can access the desktop image that AcquireNextFrame returns in the variable at ppDesktopResource. - If the caller specifies a zero time-out interval in the TimeoutInMilliseconds parameter, AcquireNextFrame verifies whether there is a new desktop image available, returns immediately, and indicates its outcome with the return value. If the caller specifies an INFINITE time-out interval in the TimeoutInMilliseconds parameter, the time-out interval never elapses.
Note: You cannot cancel the wait that you specified in the TimeoutInMilliseconds parameter. Therefore, if you must periodically check for other conditions (for example, a terminate signal), you should specify a non-INFINITE time-out interval. After the time-out interval elapses, you can check for these other conditions and then call AcquireNextFrame again to wait for the next frame. AcquireNextFrame acquires a new desktop frame when the operating system either updates the desktop bitmap image or changes the shape or position of a hardware pointer. The new frame that AcquireNextFrame acquires might have only the desktop image updated, only the pointer shape or position updated, or both.
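The recommended pattern above (a finite timeout so a terminate flag can be checked between waits, and releasing each frame before acquiring the next) can be sketched as follows. The stub functions are hypothetical stand-ins for the duplication interface, not the real DXGI calls:

```cpp
// Hypothetical stand-ins for the duplication interface. The stub "times out"
// twice before a new desktop image becomes available, so the loop below can
// demonstrate polling with a finite (non-INFINITE) time-out interval.
enum AcquireStatus { ACQUIRE_TIMEOUT, ACQUIRE_NEW_FRAME };

static int g_timeoutsRemaining = 2;

AcquireStatus AcquireNextFrameStub(unsigned /*timeoutMs*/)
{
    if (g_timeoutsRemaining > 0) { --g_timeoutsRemaining; return ACQUIRE_TIMEOUT; }
    return ACQUIRE_NEW_FRAME;
}

void ReleaseFrameStub() {}

// Because the wait cannot be canceled, use a finite time-out so a terminate
// flag can be checked between waits.
int CaptureOneFrame(bool& terminate)
{
    int framesProcessed = 0;
    while (!terminate) {
        if (AcquireNextFrameStub(16) == ACQUIRE_TIMEOUT)
            continue;                  // no new image yet; recheck terminate
        ++framesProcessed;             // process the desktop image here
        ReleaseFrameStub();            // release before acquiring the next frame
        terminate = true;              // demo: stop after one frame
    }
    return framesProcessed;
}
```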
-Gets information about dirty rectangles for the current desktop frame.
-The size in bytes of the buffer that the caller passed to the pDirtyRectsBuffer parameter.
A reference to an array of
Pointer to a variable that receives the number of bytes that GetFrameDirtyRects needs to store information about dirty regions in the buffer at pDirtyRectsBuffer.
For more information about returning the required buffer size, see Remarks.
GetFrameDirtyRects returns:
GetFrameDirtyRects stores a size value in the variable at pDirtyRectsBufferSizeRequired. This value specifies the number of bytes that GetFrameDirtyRects needs to store information about dirty regions. You can use this value in the following situations to determine the amount of memory to allocate for future buffers that you pass to pDirtyRectsBuffer:
The caller can also use the value returned at pDirtyRectsBufferSizeRequired to determine the number of
The buffer contains the list of dirty
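The size-query idiom described above (call once to learn the required byte count, allocate, then call again for the data) can be sketched with a hypothetical stub that mimics the GetFrameDirtyRects contract; the stub and its rectangles are illustrative, not the real API:

```cpp
#include <cstdint>
#include <vector>

struct DirtyRect { int32_t left, top, right, bottom; };

// Hypothetical stub with GetFrameDirtyRects-style semantics: it always writes
// the required byte count, and copies data only when the caller's buffer is
// large enough.
bool GetDirtyRectsStub(uint32_t bufferSize, DirtyRect* buffer, uint32_t* requiredSize)
{
    static const DirtyRect dirty[2] = { {0, 0, 10, 10}, {20, 20, 30, 30} };
    *requiredSize = sizeof(dirty);
    if (buffer == nullptr || bufferSize < sizeof(dirty))
        return false;                              // buffer too small
    for (int i = 0; i < 2; ++i)
        buffer[i] = dirty[i];
    return true;
}

// First call reports the size; second call, with a big-enough buffer, copies.
std::vector<DirtyRect> QueryDirtyRects()
{
    uint32_t required = 0;
    GetDirtyRectsStub(0, nullptr, &required);
    std::vector<DirtyRect> rects(required / sizeof(DirtyRect));
    GetDirtyRectsStub(required, rects.data(), &required);
    return rects;
}
```

The same two-call shape applies to GetFrameMoveRects and GetFramePointerShape below.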
Gets information about the moved rectangles for the current desktop frame.
-The size in bytes of the buffer that the caller passed to the pMoveRectBuffer parameter.
A reference to an array of
Pointer to a variable that receives the number of bytes that GetFrameMoveRects needs to store information about moved regions in the buffer at pMoveRectBuffer.
For more information about returning the required buffer size, see Remarks.
GetFrameMoveRects returns:
GetFrameMoveRects stores a size value in the variable at pMoveRectsBufferSizeRequired. This value specifies the number of bytes that GetFrameMoveRects needs to store information about moved regions. You can use this value in the following situations to determine the amount of memory to allocate for future buffers that you pass to pMoveRectBuffer:
The caller can also use the value returned at pMoveRectsBufferSizeRequired to determine the number of
The buffer contains the list of move RECTs for the current frame.
Note: To produce a visually accurate copy of the desktop, an application must first process all move RECTs before it processes dirty RECTs. -Gets information about the new pointer shape for the current desktop frame.
-The size in bytes of the buffer that the caller passed to the pPointerShapeBuffer parameter.
A reference to a buffer to which GetFramePointerShape copies and returns pixel data for the new pointer shape.
Pointer to a variable that receives the number of bytes that GetFramePointerShape needs to store the new pointer shape pixel data in the buffer at pPointerShapeBuffer.
For more information about returning the required buffer size, see Remarks.
Pointer to a
GetFramePointerShape returns:
GetFramePointerShape stores a size value in the variable at pPointerShapeBufferSizeRequired. This value specifies the number of bytes that GetFramePointerShape needs to store the new pointer shape pixel data. You can use the value in the following situations to determine the amount of memory to allocate for future buffers that you pass to pPointerShapeBuffer:
The pPointerShapeInfo parameter describes the new pointer shape.
-Provides the CPU with efficient access to a desktop image if that desktop image is already in system memory.
-A reference to a
MapDesktopSurface returns:
You can successfully call MapDesktopSurface if the DesktopImageInSystemMemory member of the
Invalidates the reference to the desktop image that was retrieved by using
UnMapDesktopSurface returns:
Indicates that the application finished processing the frame.
-ReleaseFrame returns:
The application must release the frame before it acquires the next frame. After the frame is released, the surface that contains the desktop bitmap becomes invalid; you will not be able to use the surface in a DirectX graphics operation.
For performance reasons, we recommend that you release the frame just before you call the
Set the priority for evicting the resource from memory.
-The eviction priority is a memory-management variable that is used by DXGI for determining how to populate overcommitted memory.
You can set priority levels other than the defined values when appropriate. For example, you can set a resource with a priority level of 0x78000001 to indicate that the resource is slightly above normal.
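As a worked example of an in-between level: assuming dxgi.h's value of 0x78000000 for DXGI_RESOURCE_PRIORITY_NORMAL (worth verifying against your SDK headers), the "slightly above normal" level mentioned above is simply the normal level plus one:

```cpp
#include <cstdint>

// Assumption: dxgi.h defines DXGI_RESOURCE_PRIORITY_NORMAL as 0x78000000.
// Adding a small offset produces an intermediate eviction priority such as
// the 0x78000001 "slightly above normal" example.
const uint32_t kPriorityNormal      = 0x78000000u;
const uint32_t kSlightlyAboveNormal = kPriorityNormal + 1;  // 0x78000001
```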
-[Starting with Direct3D 11.1, we recommend not to use GetSharedHandle anymore to retrieve the handle to a shared resource. Instead, use
Gets the handle to a shared resource.
-GetSharedHandle returns a handle for the resource that you created as shared (that is, you set the
The creator of a shared resource must not destroy the resource until all intended entities have opened the resource. The validity of the handle is tied to the lifetime of the underlying video memory. If no resource objects exist on any devices that refer to this resource, the handle is no longer valid. To extend the lifetime of the handle and video memory, you must open the shared resource on a device.
GetSharedHandle can also return handles for resources that were passed into
GetSharedHandle fails if the resource to which it wants to get a handle is not shared.
-Gets or sets the eviction priority.
-The eviction priority is a memory-management variable that is used by DXGI to determine how to manage overcommitted memory.
Priority levels other than the defined values are used when appropriate. For example, a resource with a priority level of 0x78000001 indicates that the resource is slightly above normal.
-[Starting with Direct3D 11.1, we recommend not to use GetSharedHandle anymore to retrieve the handle to a shared resource. Instead, use
Gets the handle to a shared resource.
-Returns one of the DXGI_ERROR values.
GetSharedHandle returns a handle for the resource that you created as shared (that is, you set the
The creator of a shared resource must not destroy the resource until all intended entities have opened the resource. The validity of the handle is tied to the lifetime of the underlying video memory. If no resource objects exist on any devices that refer to this resource, the handle is no longer valid. To extend the lifetime of the handle and video memory, you must open the shared resource on a device.
GetSharedHandle can also return handles for resources that were passed into
GetSharedHandle fails if the resource to which it wants to get a handle is not shared.
-Get the expected resource usage.
-A reference to a usage flag (see DXGI_USAGE). For Direct3D 10, a surface can be used as a shader input or a render-target output.
Returns one of the following DXGI_ERROR.
Set the priority for evicting the resource from memory.
-The priority is one of the following values:
Value | Meaning |
---|---|
| The resource is unused and can be evicted as soon as another resource requires the memory that the resource occupies. |
| The eviction priority of the resource is low. The placement of the resource is not critical, and minimal work is performed to find a location for the resource. For example, if a GPU can render with a vertex buffer from either local or non-local memory with little difference in performance, that vertex buffer is low priority. Other more critical resources (for example, a render target or texture) can then occupy the faster memory. |
| The eviction priority of the resource is normal. The placement of the resource is important, but not critical, for performance. The resource is placed in its preferred location instead of a low-priority resource. |
| The eviction priority of the resource is high. The resource is placed in its preferred location instead of a low-priority or normal-priority resource. |
| The resource is evicted from memory only if there is no other way of resolving the memory requirement. |
Returns one of the following DXGI_ERROR.
The eviction priority is a memory-management variable that is used by DXGI for determining how to populate overcommitted memory.
You can set priority levels other than the defined values when appropriate. For example, you can set a resource with a priority level of 0x78000001 to indicate that the resource is slightly above normal.
-Get the eviction priority.
-A reference to the eviction priority, which determines when a resource can be evicted from memory.
The following defined values are possible.
Value | Meaning |
---|---|
| The resource is unused and can be evicted as soon as another resource requires the memory that the resource occupies. |
| The eviction priority of the resource is low. The placement of the resource is not critical, and minimal work is performed to find a location for the resource. For example, if a GPU can render with a vertex buffer from either local or non-local memory with little difference in performance, that vertex buffer is low priority. Other more critical resources (for example, a render target or texture) can then occupy the faster memory. |
| The eviction priority of the resource is normal. The placement of the resource is important, but not critical, for performance. The resource is placed in its preferred location instead of a low-priority resource. |
| The eviction priority of the resource is high. The resource is placed in its preferred location instead of a low-priority or normal-priority resource. |
| The resource is evicted from memory only if there is no other way of resolving the memory requirement. |
Returns one of the following DXGI_ERROR.
The eviction priority is a memory-management variable that is used by DXGI to determine how to manage overcommitted memory.
Priority levels other than the defined values are used when appropriate. For example, a resource with a priority level of 0x78000001 indicates that the resource is slightly above normal.
- An
To determine the type of memory a resource is currently located in, use
You can retrieve the
IDXGIResource * pDXGIResource;
hr = g_pd3dTexture2D->QueryInterface(__uuidof(IDXGIResource), (void **)&pDXGIResource);
Windows Phone 8: This API is supported.
-Creates a subresource surface object.
-The index of the subresource surface object to enumerate.
The address of a reference to a
Returns
A subresource is a valid surface if the original resource would have been a valid surface had its array size been equal to 1.
Subresource surface objects implement the
CreateSubresourceSurface creates a subresource surface that is based on the resource interface on which CreateSubresourceSurface is called. For example, if the original resource interface object is a 2D texture, the created subresource surface is also a 2D texture.
You can use CreateSubresourceSurface to create parts of a stereo resource so you can use Direct2D on either the left or right part of the stereo resource.
-Creates a handle to a shared resource. You can then use the returned handle with multiple Direct3D devices.
-A reference to a
Set this parameter to
The lpSecurityDescriptor member of the structure specifies a SECURITY_DESCRIPTOR for the resource. Set this member to
The requested access rights to the resource. In addition to the generic access rights, DXGI defines the following values:
You can combine these values by using a bitwise OR operation.
The name of the resource to share. The name is limited to MAX_PATH characters. Name comparison is case sensitive. You will need the resource name if you call the
If lpName matches the name of an existing resource, CreateSharedHandle fails with
The name can have a "Global\" or "Local\" prefix to explicitly create the object in the global or session namespace. The remainder of the name can contain any character except the backslash character (\). For more information, see Kernel Object Namespaces. Fast user switching is implemented using Terminal Services sessions. Kernel object names must follow the guidelines outlined for Terminal Services so that applications can support multiple users.
The object can be created in a private namespace. For more information, see Object Namespaces.
A reference to a variable that receives the NT HANDLE value to the resource to share. You can use this handle in calls to access the resource.
CreateSharedHandle only returns the NT handle when you created the resource as shared and specified that it uses NT handles (that is, you set the
You can pass the handle that CreateSharedHandle returns in a call to the
Because the handle that CreateSharedHandle returns is an NT handle, you can use the handle with CloseHandle, DuplicateHandle, and so on. You can call CreateSharedHandle only once for a shared resource; later calls fail. If you need more handles to the same shared resource, call DuplicateHandle. When you no longer need the shared resource handle, call CloseHandle to close the handle, in order to avoid memory leaks.
If you pass a name for the resource to lpName when you call CreateSharedHandle to share the resource, you can subsequently pass this name in a call to the
If you created the resource as shared and did not specify that it uses NT handles, you cannot use CreateSharedHandle to get a handle for sharing because CreateSharedHandle will fail.
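The lifetime rules above (the shared handle is created once, further handles come from duplication, and every handle must be closed to avoid leaks) can be sketched with hypothetical integer "handles" standing in for the real NT handle APIs:

```cpp
// Hypothetical integer "handles" sketching the documented lifetime rules:
// the shared handle may be created only once per resource, further handles
// come from duplication, and every handle must be closed.
static bool g_handleCreated = false;
static int  g_openHandles   = 0;
static int  g_nextHandle    = 1;

int CreateSharedHandleStub()
{
    if (g_handleCreated)
        return -1;                 // later calls fail, as documented
    g_handleCreated = true;
    ++g_openHandles;
    return g_nextHandle++;
}

int DuplicateHandleStub(int handle)
{
    if (handle <= 0)
        return -1;
    ++g_openHandles;
    return g_nextHandle++;         // extra handles come from duplication
}

void CloseHandleStub(int handle)
{
    if (handle > 0)
        --g_openHandles;           // close every handle to avoid leaks
}
```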
-The
An image-data object is a 2D section of memory, commonly called a surface. To get the surface from an output, call
The runtime automatically creates an
Get a description of the surface.
-Get a description of the surface.
-A reference to the surface description (see
Returns
Get a reference to the data contained in the surface, and deny GPU access to the surface.
-A reference to the surface data (see
CPU read-write flags. These flags can be combined with a logical OR.
Returns
Use
Get a reference to the data contained in the surface, and deny GPU access to the surface.
-Returns
Use
The
This interface is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
An image-data object is a 2D section of memory, commonly called a surface. To get the surface from an output, call
Any object that supports
The runtime automatically creates an
Returns a device context (DC) that allows you to render to a Microsoft DirectX Graphics Infrastructure (DXGI) surface using Windows Graphics Device Interface (GDI).
-A Boolean value that specifies whether to preserve Direct3D contents in the GDI DC. TRUE directs the runtime not to preserve Direct3D contents in the GDI DC; that is, the runtime discards the Direct3D contents.
A reference to an
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
After you use the GetDC method to retrieve a DC, you can render to the DXGI surface by using GDI. The GetDC method readies the surface for GDI rendering and allows inter-operation between DXGI and GDI technologies.
Keep the following in mind when using this method:
You can also call GetDC on the back buffer at index 0 of a swap chain by obtaining an
IDXGISwapChain* g_pSwapChain = NULL;
IDXGISurface1* g_pSurface1 = NULL;
...
// Set up the device and swap chain
g_pSwapChain->GetBuffer(0, __uuidof(IDXGISurface1), (void**)&g_pSurface1);
g_pSurface1->GetDC( FALSE, &g_hDC );
...
// Draw on the DC using GDI
...
// When finished drawing, release the DC
g_pSurface1->ReleaseDC( NULL );
Releases the GDI device context (DC) that is associated with the current surface and allows you to use Direct3D to render.
-A reference to a
You can pass a reference to an empty
If this method succeeds, it returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Use the ReleaseDC method to release the DC and indicate that your application finished all GDI rendering to this surface. You must call the ReleaseDC method before you can use Direct3D to perform additional rendering.
Prior to resizing buffers you must release all outstanding DCs.
-The
An image-data object is a 2D section of memory, commonly called a surface. To get the surface from an output, call
Any object that supports
The runtime automatically creates an
You can call the
Gets the parent resource and subresource index that support a subresource surface.
-The globally unique identifier (
A reference to a buffer that receives a reference to the parent resource object for the subresource surface.
A reference to a variable that receives the index of the subresource surface.
Returns
For subresource surface objects that the
Current objects that implement
An
You can create a swap chain by
- calling
[Starting with Direct3D 11.1, we recommend not to use GetDesc anymore to get a description of the swap chain. Instead, use
Get a description of the swap chain.
-Get the output (the display monitor) that contains the majority of the client area of the target window.
-If the method succeeds, the output interface will be filled and its reference count incremented. When you are finished with it, be sure to release the interface to avoid a memory leak.
The output is also owned by the adapter on which the swap chain's device was created.
You cannot call GetContainingOutput on a swap chain that you created with
Gets the number of times that
For info about presentation statistics for a frame, see
Presents a rendered image to the user.
-An integer that specifies how to synchronize presentation of a frame with the vertical blank.
For the bit-block transfer (bitblt) model (
For the flip model (
For an example that shows how sync-interval values affect a flip presentation queue, see Remarks.
If the update region straddles more than one output (each represented by
An integer value that contains swap-chain presentation options. These options are defined by the DXGI_PRESENT constants.
Possible return values include:
Starting with Direct3D 11.1, consider using
For the best performance when flipping swap-chain buffers in a full-screen application, see Full-Screen Application Performance Hints.
Because calling Present might cause the render thread to wait on the message-pump thread, be careful when calling this method in an application that uses multiple threads. For more details, see Multithreading Considerations.
Differences between Direct3D 9 and Direct3D 10: Specifying |
For flip presentation model swap chains that you create with the
For info about how data values change when you present content to the screen, see Converting data for the color space.
-Accesses one of the swap-chain's back buffers.
-A zero-based buffer index.
If the swap chain's swap effect is
If the swap chain's swap effect is either
The type of interface used to manipulate the buffer.
A reference to a back-buffer interface.
Returns one of the following DXGI_ERROR.
Sets the display state to windowed or full screen.
-A Boolean value that specifies whether to set the display state to windowed or full screen. TRUE for full screen, and
If you pass TRUE to the Fullscreen parameter to set the display state to full screen, you can optionally set this parameter to a reference to an
This method returns:
When this error is returned, an application can continue to run in windowed mode and try to switch to full-screen mode later.
DXGI may change the display state of a swap chain in response to end user or system requests.
We recommend that you create a windowed swap chain and allow the end user to change the swap chain to full screen through SetFullscreenState; that is, do not set the Windowed member of
Get the state associated with full-screen mode.
-A reference to a boolean whose value is either:
A reference to the output target (see
Returns one of the following DXGI_ERROR.
When the swap chain is in full-screen mode, a reference to the target output will be returned and its reference count will be incremented.
-[Starting with Direct3D 11.1, we recommend not to use GetDesc anymore to get a description of the swap chain. Instead, use
Get a description of the swap chain.
-Returns one of the following DXGI_ERROR.
Changes the swap chain's back buffer size, format, and number of buffers. This should be called when the application window is resized.
-The number of buffers in the swap chain (including all back and front buffers). This number can be different from the number of buffers with which you created the swap chain. This number can't be greater than DXGI_MAX_SWAP_CHAIN_BUFFERS. Set this number to zero to preserve the existing number of buffers in the swap chain. You can't specify less than two buffers for the flip presentation model.
The new width of the back buffer. If you specify zero, DXGI will use the width of the client area of the target window. You can't specify the width as zero if you called the
The new height of the back buffer. If you specify zero, DXGI will use the height of the client area of the target window. You can't specify the height as zero if you called the
A
A combination of
Returns
You can't resize a swap chain unless you release all outstanding references to its back buffers. You must release all of its direct and indirect references on the back buffers in order for ResizeBuffers to succeed.
Direct references are held by the application after it calls AddRef on a resource.
Indirect references are held by views to a resource, binding a view of the resource to a device context, a command list that used the resource, a command list that used a view to that resource, a command list that executed another command list that used the resource, and so on.
Before you call ResizeBuffers, ensure that the application releases all references (by calling the appropriate number of Release invocations) on the resources, any views to the resource, and any command lists that use either the resources or views, and ensure that neither the resource nor a view is still bound to a device context. You can use
For swap chains that you created with
We recommend that you call ResizeBuffers when a client window is resized (that is, when an application receives a WM_SIZE message).
The only difference between
Resizes the output target.
-A reference to a
Returns a code that indicates success or failure.
ResizeTarget resizes the target window when the swap chain is in windowed mode, and changes the display mode on the target output when the swap chain is in full-screen mode. Therefore, apps can call ResizeTarget to resize the target window (rather than a Microsoft Win32 API such as SetWindowPos) without knowledge of the swap chain display mode.
If a Windows Store app calls ResizeTarget, it fails with
You cannot call ResizeTarget on a swap chain that you created with
Apps must still call
Get the output (the display monitor) that contains the majority of the client area of the target window.
-A reference to the output interface (see
Returns one of the following DXGI_ERROR.
If the method succeeds, the output interface will be filled and its reference count incremented. When you are finished with it, be sure to release the interface to avoid a memory leak.
The output is also owned by the adapter on which the swap chain's device was created.
You cannot call GetContainingOutput on a swap chain that you created with
Gets performance statistics about the last render frame.
-A reference to a
Returns one of the DXGI_ERROR values.
You cannot use GetFrameStatistics for swap chains that both use the bit-block transfer (bitblt) presentation model and draw in windowed mode.
You can only use GetFrameStatistics for swap chains that either use the flip presentation model or draw in full-screen mode. You set the
Gets the number of times that
Returns one of the DXGI_ERROR values.
For info about presentation statistics for a frame, see
Gets performance statistics about the last render frame.
-You cannot use GetFrameStatistics for swap chains that both use the bit-block transfer (bitblt) presentation model and draw in windowed mode.
You can only use GetFrameStatistics for swap chains that either use the flip presentation model or draw in full-screen mode. You set the
[Starting with Direct3D 11.1, we recommend not to use Present anymore to present a rendered image. Instead, use
Presents a rendered image to the user.
-Possible return values include:
Note: The Present method can return either
Starting with Direct3D 11.1, we recommend that you instead use
For the best performance when flipping swap-chain buffers in a full-screen application, see Full-Screen Application Performance Hints.
Because calling Present might cause the render thread to wait on the message-pump thread, be careful when calling this method in an application that uses multiple threads. For more details, see Multithreading Considerations.
Differences between Direct3D 9 and Direct3D 10: Specifying |
For flip presentation model swap chains that you create with the
For info about how data values change when you present content to the screen, see Converting data for the color space.
-Provides presentation capabilities that are enhanced from
You can create a swap chain by
- calling
Gets a description of the swap chain.
-Gets a description of a full-screen swap chain.
-The semantics of GetFullscreenDesc are identical to those of the IDXGISwapChain::GetDesc method for
Retrieves the underlying
Applications call the
Determines whether a swap chain supports "temporary mono."
-Temporary mono is a feature where a stereo swap chain can be presented using only the content in the left buffer. To present using the left buffer as a mono buffer, an application calls the
Gets the output (the display monitor) to which you can restrict the contents of a present operation.
-If the method succeeds, the runtime fills the buffer at ppRestrictToOutput with a reference to the restrict-to output interface. This restrict-to output interface has its reference count incremented. When you are finished with it, be sure to release the interface to avoid a memory leak.
The output is also owned by the adapter on which the swap chain's device was created.
-Retrieves or sets the background color of the swap chain.
-Gets or sets the rotation of the back buffers for the swap chain.
-Gets a description of the swap chain.
-A reference to a
Returns
Gets a description of a full-screen swap chain.
-A reference to a
GetFullscreenDesc returns:
The semantics of GetFullscreenDesc are identical to those of the IDXGISwapChain::GetDesc method for
Retrieves the underlying
Returns
If pHwnd receives
Applications call the
Retrieves the underlying CoreWindow object for this swap-chain object.
-GetCoreWindow returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, GetCoreWindow fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Applications call the
Presents a frame on the display screen.
-An integer that specifies how to synchronize presentation of a frame with the vertical blank.
For the bit-block transfer (bitblt) model (
For the flip model (
For an example that shows how sync-interval values affect a flip presentation queue, see Remarks.
If the update region straddles more than one output (each represented by
An integer value that contains swap-chain presentation options. These options are defined by the DXGI_PRESENT constants.
A reference to a
Possible return values include:
An app can use Present1 to optimize presentation by specifying scroll and dirty rectangles. When the runtime has information about these rectangles, the runtime can then perform necessary bitblts during presentation more efficiently and pass this metadata to the Desktop Window Manager (DWM). The DWM can then use the metadata to optimize presentation and pass the metadata to indirect displays and terminal servers to optimize traffic over the wire. An app must confine its modifications to only the dirty regions that it passes to Present1, as well as modify the entire dirty region to avoid undefined resource contents from being exposed.
For flip presentation model swap chains that you create with the
For info about how data values change when you present content to the screen, see Converting data for the color space.
For info about calling Present1 when your app uses multiple threads, see Multithread Considerations and Multithreading and DXGI.
-Determines whether a swap chain supports "temporary mono."
-Indicates whether to use the swap chain in temporary mono mode. TRUE indicates that you can use temporary-mono mode; otherwise,
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, IsTemporaryMonoSupported always returns
Temporary mono is a feature where a stereo swap chain can be presented using only the content in the left buffer. To present using the left buffer as a mono buffer, an application calls the
Gets the output (the display monitor) to which you can restrict the contents of a present operation.
- A reference to a buffer that receives a reference to the
Returns
If the method succeeds, the runtime fills the buffer at ppRestrictToOutput with a reference to the restrict-to output interface. This restrict-to output interface has its reference count incremented. When you are finished with it, be sure to release the interface to avoid a memory leak.
The output is also owned by the adapter on which the swap chain's device was created.
-Changes the background color of the swap chain.
-A reference to a DXGI_RGBA structure that specifies the background color to set.
SetBackgroundColor returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, SetBackgroundColor fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
The background color affects only swap chains that you create with
When you set the background color, it is not immediately realized. It takes effect in conjunction with your next call to the
When you call the
Retrieves the background color of the swap chain.
-A reference to a DXGI_RGBA structure that receives the background color of the swap chain.
GetBackgroundColor returns:
Sets the rotation of the back buffers for the swap chain.
-A
SetRotation returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, SetRotation fails with
You can only use SetRotation to rotate the back buffers for flip-model swap chains that you present in windowed mode.
SetRotation isn't supported for rotating the back buffers for flip-model swap chains that you present in full-screen mode. In this situation, SetRotation doesn't fail, but you must ensure that you specify no rotation (
Gets the rotation of the back buffers for the swap chain.
-A reference to a variable that receives a
Returns
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, GetRotation fails with
Extends
You can create a swap chain by
- calling
Gets or sets the number of frames that the swap chain is allowed to queue for rendering.
-Returns a waitable handle that signals when the DXGI adapter has finished presenting a new frame.
Windows 8.1 introduces new APIs that allow lower-latency rendering by waiting until the previous frame is presented to the display before drawing the next frame. To use this method, first create the DXGI swap chain with the
Gets or sets the transform matrix that will be applied to a composition swap chain upon the next present.
Starting with Windows 8.1, Windows Store apps are able to place DirectX swap chain visuals in XAML pages using the SwapChainPanel element, which can be placed and sized arbitrarily. This exposes the DirectX swap chain visuals to touch scaling and translation scenarios using touch UI. The GetMatrixTransform and SetMatrixTransform methods are used to synchronize scaling of the DirectX swap chain with its associated SwapChainPanel element. Only simple scale/translation elements in the matrix are allowed; the call will fail if the matrix contains skew/rotation elements.
-Sets the source region to be used for the swap chain.
Use SetSourceSize to specify the portion of the swap chain from which the operating system presents. This allows an effective resize without calling the more-expensive
This method can return:
Gets the source region used for the swap chain.
Use GetSourceSize to get the portion of the swap chain from which the operating system presents. The source rectangle is always defined by the region [0, 0, Width, Height]. Use SetSourceSize to set this portion of the swap chain.
-This method can return error codes that are described in the DXGI_ERROR topic.
Sets the number of frames that the swap chain is allowed to queue for rendering.
-The maximum number of back buffer frames that will be queued for the swap chain. This value is 3 by default.
Returns
This method is only valid for use on swap chains created with
Gets the number of frames that the swap chain is allowed to queue for rendering.
-The maximum number of back buffer frames that will be queued for the swap chain. This value is 1 by default, but should be set to 2 if the scene takes longer than it takes for one vertical refresh (typically about 16ms) to draw.
Returns
Returns a waitable handle that signals when the DXGI adapter has finished presenting a new frame.
Windows 8.1 introduces new APIs that allow lower-latency rendering by waiting until the previous frame is presented to the display before drawing the next frame. To use this method, first create the DXGI swap chain with the
A handle to the waitable object, or
Sets the transform matrix that will be applied to a composition swap chain upon the next present.
Starting with Windows 8.1, Windows Store apps are able to place DirectX swap chain visuals in XAML pages using the SwapChainPanel element, which can be placed and sized arbitrarily. This exposes the DirectX swap chain visuals to touch scaling and translation scenarios using touch UI. The GetMatrixTransform and SetMatrixTransform methods are used to synchronize scaling of the DirectX swap chain with its associated SwapChainPanel element. Only simple scale/translation elements in the matrix are allowed; the call will fail if the matrix contains skew/rotation elements.
-SetMatrixTransform returns:
Gets the transform matrix that will be applied to a composition swap chain upon the next present.
Starting with Windows 8.1, Windows Store apps are able to place DirectX swap chain visuals in XAML pages using the SwapChainPanel element, which can be placed and sized arbitrarily. This exposes the DirectX swap chain visuals to touch scaling and translation scenarios using touch UI. The GetMatrixTransform and SetMatrixTransform methods are used to synchronize scaling of the DirectX swap chain with its associated SwapChainPanel element. Only simple scale/translation elements in the matrix are allowed; the call will fail if the matrix contains skew/rotation elements.
-GetMatrixTransform returns:
[This documentation is preliminary and is subject to change.]
Gets the source region used for the swap chain.
Use GetSourceSize to get the portion of the swap chain from which the operating system presents. The source rectangle is always defined by the region [0, 0, Width, Height]. Use SetSourceSize to set this portion of the swap chain.
-This method can return error codes that are described in the DXGI_ERROR topic.
Extends
Gets the index of the swap chain's current back buffer.
-Sets the color space used by the swap chain.
-Gets the index of the swap chain's current back buffer.
-Returns the index of the current back buffer.
Checks the swap chain's support for color space.
-A
A reference to a variable that receives a combination of
Sets the color space used by the swap chain.
-A
This method returns
Changes the swap chain's back buffer size, format, and number of buffers, where the swap chain was created using a D3D12 command queue as an input device. This should be called when the application window is resized.
-The number of buffers in the swap chain (including all back and front buffers). This number can be different from the number of buffers with which you created the swap chain. This number can't be greater than DXGI_MAX_SWAP_CHAIN_BUFFERS. Set this number to zero to preserve the existing number of buffers in the swap chain. You can't specify less than two buffers for the flip presentation model.
The new width of the back buffer. If you specify zero, DXGI will use the width of the client area of the target window. You can't specify the width as zero if you called the
The new height of the back buffer. If you specify zero, DXGI will use the height of the client area of the target window. You can't specify the height as zero if you called the
A
A combination of
An array of UINTs, of total size BufferCount, where the value indicates which node the back buffer should be created on. Buffers created using ResizeBuffers1 with a non-null pCreationNodeMask array are visible to all nodes.
An array of command queues (
Returns
This method is only valid to call when the swapchain was created using a D3D12 command queue (
When a swapchain is created on a multi-GPU adapter, the backbuffers are all created on node 1 and only a single command queue is supported. ResizeBuffers1 enables applications to create backbuffers on different nodes, allowing a different command queue to be used with each node. These capabilities enable Alternate Frame Rendering (AFR) techniques to be used with the swapchain. See Direct3D 12 Multi-Adapters.
The only difference between
Also see the Remarks section in
An
You can create a swap chain by
- calling
This method sets High Dynamic Range (HDR) and Wide Color Gamut (WCG) header metadata.
-Specifies one member of the
Specifies the size of pMetaData, in bytes.
Specifies a void reference that references the metadata, if it exists. Refer to the
This method returns an
This method sets metadata to enable a monitor's output to be adjusted depending on its capabilities.
-This swap chain interface allows desktop media applications to request a seamless change to a specific refresh rate.
For example, a media application presenting video at a typical framerate of 23.976 frames per second can request a custom refresh rate of 24 or 48 Hz to eliminate jitter. If the request is approved, the app starts presenting frames at the custom refresh rate immediately, without the typical 'mode switch' a user would experience when changing the refresh rate themselves by using the control panel.
-Seamless changes to custom framerates can only be done on integrated panels. Custom frame rates cannot be applied to external displays. If the DXGI output adapter is attached to an external display then CheckPresentDurationSupport will return (0, 0) for upper and lower bounds, indicating that the device does not support seamless refresh rate changes.
Custom refresh rates can be used when displaying video with a dynamic framerate. However, the refresh rate change should be kept imperceptible to the user. A best practice for keeping the refresh rate transition imperceptible is to only set the custom framerate if the app determines it can present at that rate for at least 5 seconds.
-Queries the system for a
Requests a custom presentation duration (custom refresh rate).
-Queries the system for a
This method returns
Requests a custom presentation duration (custom refresh rate).
-The custom presentation duration, specified in hundreds of nanoseconds.
This method returns
Queries the graphics driver for a supported frame present duration corresponding to a custom refresh rate.
-Indicates the frame duration to check. This value is the duration of one frame at the desired refresh rate, specified in hundreds of nanoseconds. For example, set this field to 166667 to check for 60 Hz refresh rate support.
A variable that will be set to the closest supported frame present duration that's smaller than the requested value, or zero if the device does not support any lower duration.
A variable that will be set to the closest supported frame present duration that's larger than the requested value, or zero if the device does not support any higher duration.
This method returns
If the DXGI output adapter does not support custom refresh rates (for example, an external display) then the display driver will set upper and lower bounds to (0, 0).
-Describes an adapter (or video card) by using DXGI 1.0.
-The
A string that contains the adapter description. On feature level 9 graphics hardware, GetDesc returns "Software Adapter" for the description string.
The PCI ID of the hardware vendor. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the hardware vendor.
The PCI ID of the hardware device. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the hardware device.
The PCI ID of the sub system. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the sub system.
The PCI ID of the revision number of the adapter. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the revision number of the adapter.
The number of bytes of dedicated video memory that are not shared with the CPU.
The number of bytes of dedicated system memory that are not shared with the CPU. This memory is allocated from available system memory at boot time.
The number of bytes of shared system memory. This is the maximum value of system memory that may be consumed by the adapter during operation. Any incidental memory consumed by the driver as it manages and uses video memory is additional.
A unique value that identifies the adapter. See
Describes an adapter (or video card) using DXGI 1.1.
-The
A string that contains the adapter description. On feature level 9 graphics hardware, GetDesc1 returns "Software Adapter" for the description string.
The PCI ID of the hardware vendor. On feature level 9 graphics hardware, GetDesc1 returns zeros for the PCI ID of the hardware vendor.
The PCI ID of the hardware device. On feature level 9 graphics hardware, GetDesc1 returns zeros for the PCI ID of the hardware device.
The PCI ID of the sub system. On feature level 9 graphics hardware, GetDesc1 returns zeros for the PCI ID of the sub system.
The PCI ID of the revision number of the adapter. On feature level 9 graphics hardware, GetDesc1 returns zeros for the PCI ID of the revision number of the adapter.
The number of bytes of dedicated video memory that are not shared with the CPU.
The number of bytes of dedicated system memory that are not shared with the CPU. This memory is allocated from available system memory at boot time.
The number of bytes of shared system memory. This is the maximum value of system memory that may be consumed by the adapter during operation. Any incidental memory consumed by the driver as it manages and uses video memory is additional.
A unique value that identifies the adapter. See
A value of the
Describes an adapter (or video card) that uses Microsoft DirectX Graphics Infrastructure (DXGI) 1.2.
-The
A string that contains the adapter description.
The PCI ID of the hardware vendor.
The PCI ID of the hardware device.
The PCI ID of the sub system.
The PCI ID of the revision number of the adapter.
The number of bytes of dedicated video memory that are not shared with the CPU.
The number of bytes of dedicated system memory that are not shared with the CPU. This memory is allocated from available system memory at boot time.
The number of bytes of shared system memory. This is the maximum value of system memory that may be consumed by the adapter during operation. Any incidental memory consumed by the driver as it manages and uses video memory is additional.
A unique value that identifies the adapter. See
A value of the
A value of the
A value of the
Describes an adapter (or video card) by using DXGI 1.0.
-The
A string that contains the adapter description. On feature level 9 graphics hardware, GetDesc returns "Software Adapter" for the description string.
The PCI ID of the hardware vendor. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the hardware vendor.
The PCI ID of the hardware device. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the hardware device.
The PCI ID of the sub system. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the sub system.
The PCI ID of the revision number of the adapter. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the revision number of the adapter.
The number of bytes of dedicated video memory that are not shared with the CPU.
The number of bytes of dedicated system memory that are not shared with the CPU. This memory is allocated from available system memory at boot time.
The number of bytes of shared system memory. This is the maximum value of system memory that may be consumed by the adapter during operation. Any incidental memory consumed by the driver as it manages and uses video memory is additional.
A unique value that identifies the adapter. See
Used with
Describes timing and presentation statistics for a frame.
-You initialize the
You can only use
The values in the PresentCount and PresentRefreshCount members indicate information about when a frame was presented on the display screen. You can use these values to determine whether a glitch occurred. The values in the SyncRefreshCount and SyncQPCTime members indicate timing information that you can use for audio and video synchronization or very precise animation. If the swap chain draws in full-screen mode, these values are based on when the computer booted. If the swap chain draws in windowed mode, these values are based on when the swap chain is created.
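The glitch check described above can be sketched as follows; the struct and field names here are hypothetical stand-ins for the real frame-statistics values, which come from the swap chain:

```cpp
#include <cstdint>

// Hypothetical mirror of the two counters discussed above; the real values
// come from the swap chain's frame statistics.
struct FrameStats {
    uint32_t presentCount;        // running total of presents
    uint32_t presentRefreshCount; // running total of v-blanks at last present
};

// For an application that intends to present once per v-blank, a glitch is
// suspected when more refreshes than presents elapsed between two samples.
bool GlitchOccurred(const FrameStats& prev, const FrameStats& cur)
{
    uint32_t presents  = cur.presentCount - prev.presentCount;
    uint32_t refreshes = cur.presentRefreshCount - prev.presentRefreshCount;
    return refreshes > presents;
}
```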
-A value that represents the running total count of times that an image was presented to the monitor since the computer booted.
Note: The number of times that an image was presented to the monitor is not necessarily the same as the number of times that you called Present.
A value that represents the running total count of v-blanks at which the last image was presented to the monitor and that have happened since the computer booted (for windowed mode, since the swap chain was created).
A value that represents the running total count of v-blanks when the scheduler last sampled the machine time by calling QueryPerformanceCounter and that have happened since the computer booted (for windowed mode, since the swap chain was created).
A value that represents the high-resolution performance counter timer. This value is the same as the value returned by the QueryPerformanceCounter function.
Reserved. Always returns 0.
Used to verify system approval for the app's custom present duration (custom refresh rate). Approval should be continuously verified on a frame-by-frame basis.
-This structure is used with the GetFrameStatisticsMedia method.
-A value that represents the running total count of times that an image was presented to the monitor since the computer booted.
Note: The number of times that an image was presented to the monitor is not necessarily the same as the number of times that you called Present.
A value that represents the running total count of v-blanks at which the last image was presented to the monitor and that have happened since the computer booted (for windowed mode, since the swap chain was created).
A value that represents the running total count of v-blanks when the scheduler last sampled the machine time by calling QueryPerformanceCounter and that have happened since the computer booted (for windowed mode, since the swap chain was created).
A value that represents the high-resolution performance counter timer. This value is the same as the value returned by the QueryPerformanceCounter function.
Reserved. Always returns 0.
A value indicating the composition presentation mode. This value is used to determine whether the app should continue to use the decode swap chain. See
If the system approves an app's custom present duration request, this field is set to the approved custom present duration.
If the app's custom present duration request is not approved, this field is set to zero.
Controls the settings of a gamma curve.
-The
For info about using gamma correction, see Using gamma correction.
-A
A
An array of
Controls the gamma capabilities of an adapter.
-To get a list of the capabilities for controlling gamma correction, call
For info about using gamma correction, see Using gamma correction.
-True if scaling and offset operations are supported during gamma correction; otherwise, false.
A value describing the maximum range of the control-point positions.
A value describing the minimum range of the control-point positions.
A value describing the number of control points in the array.
An array of values describing control points; the maximum length of control points is 1025.
Describes the 10 bit display metadata, and is usually used for video. This is used to adjust the output to best match a display's capabilities.
-The X and Y coordinates of the parameters are the xy chromaticity coordinates in the CIE 1931 color space. The values are normalized to 50000, so to get a value between 0.0 and 1.0, divide by 50000.
This structure is used in conjunction with the SetHDRMetaData method.
-The chromaticity coordinates of the 1.0 red value. Index 0 contains the X coordinate and index 1 contains the Y coordinate.
The chromaticity coordinates of the 1.0 green value. Index 0 contains the X coordinate and index 1 contains the Y coordinate.
The chromaticity coordinates of the 1.0 blue value. Index 0 contains the X coordinate and index 1 contains the Y coordinate.
The chromaticity coordinates of the white point. Index 0 contains the X coordinate and index 1 contains the Y coordinate.
The maximum number of nits of the display used to master the content. Units are 0.0001 nit, so if the value is 1 nit, the value should be 10,000.
The minimum number of nits (in units of 0.00001 nit) of the display used to master the content.
The maximum nit value (in units of 0.00001 nit) used anywhere in the content.
The per-frame average of the maximum nit values (in units of 0.00001 nit).
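The encodings described above (chromaticity normalized to 50000, maximum mastering luminance in units of 0.0001 nit) can be sketched as small helpers; the function names are hypothetical:

```cpp
#include <cstdint>

// Encode a CIE 1931 xy chromaticity coordinate (0.0..1.0) into the
// normalized-to-50000 integer form described above.
uint16_t EncodeChromaticity(double xy)
{
    return static_cast<uint16_t>(xy * 50000.0 + 0.5); // round to nearest
}

// Encode a mastering luminance in nits into units of 0.0001 nit,
// so 1 nit encodes as 10,000.
uint32_t EncodeMaxMasteringLuminance(double nits)
{
    return static_cast<uint32_t>(nits * 10000.0 + 0.5);
}
```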
Describes a JPEG AC Huffman table.
-The number of codes for each code length.
The Huffman code values, in order of increasing code length.
Describes a JPEG DC Huffman table.
-The number of codes for each code length.
The Huffman code values, in order of increasing code length.
Describes a JPEG quantization table.
-An array of bytes containing the elements of the quantization table.
Describes a mapped rectangle that is used to access a surface.
-The
A value that describes the width, in bytes, of the surface.
A reference to the image buffer of the surface.
Describes a display mode.
-This structure is used by the GetDisplayModeList and FindClosestMatchingMode methods.
The following format values are valid for display modes and when you create a bit-block transfer (bitblt) model swap chain. The valid values depend on the feature level that you are working with.
Feature level >= 9.1
Feature level >= 10.0
Feature level >= 11.0
You can pass one of these format values to
Starting with Windows 8 for a flip model swap chain (that is, a swap chain that has the
Because of the relaxed render target creation rules that Direct3D 11 has for back buffers, applications can create a
A value that describes the resolution width. If you specify the width as zero when you call the
A value describing the resolution height. If you specify the height as zero when you call the
A
A
A member of the
A member of the
Describes a display mode and whether the display mode supports stereo.
-This structure is used by the GetDisplayModeList1 and FindClosestMatchingMode1 methods.
-A value that describes the resolution width.
A value that describes the resolution height.
A
A
A
A
Specifies whether the full-screen display mode is stereo. TRUE if stereo; otherwise, FALSE.
Describes an output or physical connection between the adapter (video card) and a device.
-The
A string that contains the name of the output device.
A
True if the output is attached to the desktop; otherwise, false.
A member of the
An
Describes an output or physical connection between the adapter (video card) and a device.
-The
A string that contains the name of the output device.
A
True if the output is attached to the desktop; otherwise, false.
A member of the
An
The
This structure is used by GetDesc.
-The
A non-zero LastMouseUpdateTime indicates an update to either a mouse reference position or a mouse reference position and shape. That is, the mouse reference position is always valid for a non-zero LastMouseUpdateTime; however, the application must check the value of the PointerShapeBufferSize member to determine whether the shape was updated too.
If only the reference was updated (that is, the desktop image was not updated), the AccumulatedFrames, TotalMetadataBufferSize, and LastPresentTime members are set to zero.
An AccumulatedFrames value of one indicates that the application completed processing the last frame before a new desktop image was presented. If the AccumulatedFrames value is greater than one, more desktop image updates have occurred while the application processed the last desktop update. In this situation, the operating system accumulated the update regions. For more information about desktop updates, see Desktop Update Data.
A non-zero TotalMetadataBufferSize indicates the total size of the buffers that are required to store all the desktop update metadata. An application cannot determine the size of each type of metadata. The application must call the
The time stamp of the last update of the desktop image. The operating system calls the QueryPerformanceCounter function to obtain the value. A zero value indicates that the desktop image was not updated since an application last called the
The time stamp of the last update to the mouse. The operating system calls the QueryPerformanceCounter function to obtain the value. A zero value indicates that the position or shape of the mouse was not updated since an application last called the
The number of frames that the operating system accumulated in the desktop image surface since the calling application processed the last desktop image. For more information about this number, see Remarks.
Specifies whether the operating system accumulated updates by coalescing dirty regions. Therefore, the dirty regions might contain unmodified pixels. TRUE if dirty regions were accumulated; otherwise, FALSE.
Specifies whether the desktop image might contain protected content that was already blacked out in the desktop image. TRUE if protected content was already blacked out; otherwise, FALSE.
A
Size in bytes of the buffers to store all the desktop update metadata for this frame. For more information about this size, see Remarks.
Size in bytes of the buffer to hold the new pixel data for the mouse shape. For more information about this size, see Remarks.
The
This structure is used by GetFrameMoveRects.
-The starting position of a rectangle.
The target region to which to move a rectangle.
The
The Position member is valid only if the Visible member's value is set to TRUE.
-The position of the hardware cursor relative to the top-left of the adapter output.
Specifies whether the hardware cursor is visible. TRUE if visible; otherwise, FALSE.
The
An application draws the cursor shape with the top-left-hand corner drawn at the position that the Position member of the
An application calls the
A
The width in pixels of the mouse cursor.
The height in scan lines of the mouse cursor.
The width in bytes of the mouse cursor.
The position of the cursor's hot spot relative to its upper-left pixel. An application does not use the hot spot when it determines where to draw the cursor shape.
Describes information about present that helps the operating system optimize presentation.
-This structure is used by the Present1 method.
The scroll rectangle and the list of dirty rectangles could overlap. In this situation, the dirty rectangles take priority. Applications can then have pieces of dynamic content on top of a scrolled area. For example, an application could scroll a page and play video at the same time.
The following diagram and coordinates illustrate this example.
DirtyRectsCount = 2
pDirtyRects[0] = { 10, 30, 40, 50 } // Video
pDirtyRects[1] = { 0, 70, 50, 80 } // New line
*pScrollRect = { 0, 0, 50, 70 }
*pScrollOffset = { 0, -10 }
Parts of the previous frame and content that the application renders are combined to produce the final frame that the operating system presents on the display screen. Most of the window is scrolled from the previous frame. The application must update the video frame with the new chunk of content that appears due to scrolling.
The dashed rectangle shows the scroll rectangle in the current frame. The scroll rectangle is specified by the pScrollRect member. The arrow shows the scroll offset. The scroll offset is specified by the pScrollOffset member. Filled rectangles show dirty rectangles that the application updated with new content. The filled rectangles are specified by the DirtyRectsCount and pDirtyRects members.
The scroll rectangle and offset are not supported for the
The actual implementation of composition and necessary bitblts is different for the bitblt model and the flip model. For more info about these models, see DXGI Flip Model.
For more info about the flip-model swap chain and optimizing presentation, see Enhancing presentation with the flip model, dirty rectangles, and scrolled areas.
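The priority rule above (dirty rectangles take precedence where they overlap the scroll rectangle) depends on a rectangle-overlap test, sketched here with a hypothetical Rect type; treating right/bottom as exclusive is an assumption, though it matches common Windows RECT conventions:

```cpp
#include <cstdint>

// Hypothetical stand-in for the rectangle type used by the present
// parameters (left/top inclusive, right/bottom exclusive by assumption).
struct Rect { int32_t left, top, right, bottom; };

// True when the two rectangles share at least one pixel; where a dirty
// rectangle overlaps the scroll rectangle, the dirty content wins.
bool RectsOverlap(const Rect& a, const Rect& b)
{
    return a.left < b.right && b.left < a.right &&
           a.top  < b.bottom && b.top  < a.bottom;
}
```

With the example values from the remarks, the video rectangle { 10, 30, 40, 50 } overlaps the scroll rectangle { 0, 0, 50, 70 }, while the new-line rectangle { 0, 70, 50, 80 } lies entirely below it.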
-The number of updated rectangles that you update in the back buffer for the presented frame. The operating system uses this information to optimize presentation. You can set this member to 0 to indicate that you update the whole frame.
A list of updated rectangles that you update in the back buffer for the presented frame. An application must update every single pixel in each rectangle that it reports to the runtime; the application cannot assume that the pixels are saved from the previous frame. For more information about updating dirty rectangles, see Remarks. You can set this member to
A reference to the scrolled rectangle. The scrolled rectangle is the rectangle of the previous frame from which the runtime bit-block transfers (bitblts) content. The runtime also uses the scrolled rectangle to optimize presentation in terminal server and indirect display scenarios.
The scrolled rectangle also describes the destination rectangle, that is, the region on the current frame that is filled with scrolled content. You can set this member to
A reference to the offset of the scrolled area that goes from the source rectangle (of previous frame) to the destination rectangle (of current frame). You can set this member to
Describes the current video memory budgeting parameters.
-Use this structure with QueryVideoMemoryInfo.
Refer to the remarks for
Specifies the OS-provided video memory budget, in bytes, that the application should target. If CurrentUsage is greater than Budget, the application may incur stuttering or performance penalties due to background activity by the OS to provide other applications with a fair usage of video memory.
Specifies the application's current video memory usage, in bytes.
The amount of video memory, in bytes, that the application has available for reservation. To reserve this video memory, the application should call
The amount of video memory, in bytes, that is reserved by the application. The OS uses the reservation as a hint to determine the application's minimum working set. Applications should attempt to ensure that their video memory usage can be trimmed to meet this requirement.
Represents a rational number.
-This structure is a member of the
The
An unsigned integer value representing the top of the rational number.
An unsigned integer value representing the bottom of the rational number.
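A minimal sketch of such a rational value and its conversion to a floating-point rate (the type and function names are hypothetical; refresh rates such as 60000/1001 for NTSC-style 59.94 Hz timing are a typical use):

```cpp
#include <cstdint>

// Hypothetical mirror of the rational-number structure described above.
struct Rational {
    uint32_t numerator;   // top of the rational number
    uint32_t denominator; // bottom of the rational number
};

// Convert to a plain double, e.g. for displaying a refresh rate in Hz.
double ToHertz(const Rational& r)
{
    return r.denominator == 0 ? 0.0
         : static_cast<double>(r.numerator) / r.denominator;
}
```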
Describes multi-sampling parameters for a resource.
-This structure is a member of the
The default sampler mode, with no anti-aliasing, has a count of 1 and a quality level of 0.
If multi-sample antialiasing is being used, all bound render targets and depth buffers must have the same sample counts and quality levels.
Differences between Direct3D 10.0 and Direct3D 10.1 and between Direct3D 10.0 and Direct3D 11: Direct3D 10.1 has defined two standard quality levels: D3D10_STANDARD_MULTISAMPLE_PATTERN and D3D10_CENTER_MULTISAMPLE_PATTERN in the D3D10_STANDARD_MULTISAMPLE_QUALITY_LEVELS enumeration in D3D10_1.h. Direct3D 11 has defined two standard quality levels: D3D11_STANDARD_MULTISAMPLE_PATTERN and D3D11_CENTER_MULTISAMPLE_PATTERN in the D3D11_STANDARD_MULTISAMPLE_QUALITY_LEVELS enumeration in D3D11.h.
-The number of multisamples per pixel.
The image quality level. The higher the quality, the lower the performance. The valid range is between zero and one less than the level returned by ID3D10Device::CheckMultisampleQualityLevels for Direct3D 10 or ID3D11Device::CheckMultisampleQualityLevels for Direct3D 11.
For Direct3D 10.1 and Direct3D 11, you can use two special quality level values. For more information about these quality level values, see Remarks.
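A sketch of the default and the quality-level rule described above, using a hypothetical mirror of the structure:

```cpp
#include <cstdint>

// Hypothetical mirror of the multi-sampling parameters described above.
struct SampleDesc {
    uint32_t count;   // multisamples per pixel
    uint32_t quality; // 0 .. (supported quality levels - 1)
};

// The default sampler mode: no anti-aliasing, count 1, quality level 0.
SampleDesc DefaultSampleDesc() { return {1, 0}; }

// Given the level count reported by CheckMultisampleQualityLevels, the
// highest valid quality value is one less than that count.
uint32_t HighestQuality(uint32_t supportedLevels)
{
    return supportedLevels == 0 ? 0 : supportedLevels - 1;
}
```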
Represents a handle to a shared resource.
-To create a shared surface, pass a shared-resource handle into the
A handle to a shared resource.
Describes a surface.
-This structure is used by the GetDesc and CreateSurface methods.
-A value describing the surface width.
A value describing the surface height.
A member of the
A member of the
Describes a swap chain.
-This structure is used by the GetDesc and CreateSwapChain methods.
In full-screen mode, there is a dedicated front buffer; in windowed mode, the desktop is the front buffer.
If you create a swap chain with one buffer, specifying
For performance information about flipping swap-chain buffers in full-screen application, see Full-Screen Application Performance Hints.
-A
A
A member of the DXGI_USAGE enumerated type that describes the surface usage and CPU access options for the back buffer. The back buffer can be used for shader input or render-target output.
A value that describes the number of buffers in the swap chain. When you call
An
A Boolean value that specifies whether the output is in windowed mode. TRUE if the output is in windowed mode; otherwise, FALSE.
We recommend that you create a windowed swap chain and allow the end user to change the swap chain to full screen through
For more information about choosing windowed versus full screen, see
A member of the
A member of the
Describes a swap chain.
-This structure is used by the CreateSwapChainForHwnd, CreateSwapChainForCoreWindow, CreateSwapChainForComposition, CreateSwapChainForCompositionSurfaceHandle, and GetDesc1 methods.
Note: You cannot cast a DXGI_SWAP_CHAIN_DESC1 to a DXGI_SWAP_CHAIN_DESC and vice versa.
In full-screen mode, there is a dedicated front buffer; in windowed mode, the desktop is the front buffer.
For a flip-model swap chain (that is, a swap chain that has the
A value that describes the resolution width. If you specify the width as zero when you call the
A value that describes the resolution height. If you specify the height as zero when you call the
A
Specifies whether the full-screen display mode or the swap-chain back buffer is stereo. TRUE if stereo; otherwise, FALSE.
A
A DXGI_USAGE-typed value that describes the surface usage and CPU access options for the back buffer. The back buffer can be used for shader input or render-target output.
A value that describes the number of buffers in the swap chain. When you create a full-screen swap chain, you typically include the front buffer in this value.
A
A
A
A combination of
Describes full-screen mode for a swap chain.
-This structure is used by the CreateSwapChainForHwnd and GetFullscreenDesc methods.
-A
A member of the
A member of the
A Boolean value that specifies whether the swap chain is in windowed mode. TRUE if the swap chain is in windowed mode; otherwise, FALSE.
Supplies data to an analysis effect.
- This interface can be implemented by either an
Supplies the analysis data to an analysis transform.
-The data that the transform will analyze.
The size of the analysis data.
If the method succeeds, it returns
The output of the transform will be copied to CPU-accessible memory by the imaging effects system before being passed to the implementation.
If this call fails, the corresponding
Represents a bitmap that has been bound to an
Returns the size, in device-independent pixels (DIPs), of the bitmap.
-A DIP is 1/96 of an inch. To retrieve the size in device pixels, use the
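Since a DIP is 1/96 of an inch, the DIP/pixel relationship can be sketched as follows (hypothetical helper names; the real API returns these sizes directly):

```cpp
// Convert between device-independent pixels (DIPs) and device pixels.
// A DIP is 1/96 of an inch, so pixels = DIPs * dpi / 96.
double DipsToPixels(double dips, double dpi)   { return dips * dpi / 96.0; }
double PixelsToDips(double pixels, double dpi) { return pixels * 96.0 / dpi; }
```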
Returns the size, in device-dependent units (pixels), of the bitmap.
-Retrieves the pixel format and alpha mode of the bitmap.
-Returns the size, in device-independent pixels (DIPs), of the bitmap.
-The size, in DIPs, of the bitmap.
A DIP is 1/96 of an inch. To retrieve the size in device pixels, use the
Returns the size, in device-dependent units (pixels), of the bitmap.
-The size, in pixels, of the bitmap.
Retrieves the pixel format and alpha mode of the bitmap.
-The pixel format and alpha mode of the bitmap.
Return the dots per inch (DPI) of the bitmap.
-The horizontal DPI of the image. You must allocate storage for this parameter.
The vertical DPI of the image. You must allocate storage for this parameter.
Copies the specified region from the specified bitmap into the current bitmap.
-In the current bitmap, the upper-left corner of the area to which the region specified by srcRect is copied.
The bitmap to copy from.
The area of bitmap to copy.
If this method succeeds, it returns
This method does not update the size of the current bitmap. If the contents of the source bitmap do not fit in the current bitmap, this method fails. Also, note that this method does not perform format conversion, and will fail if the bitmap formats do not match.
Calling this method may cause the current batch to flush if the bitmap is active in the batch. If the batch that was flushed does not complete successfully, this method fails. However, this method does not clear the error state of the render target on which the batch was flushed. The failing
Starting with Windows 8.1, this method supports block compressed bitmaps. If you are using a block compressed format, the end coordinates of the srcRect parameter must be multiples of 4 or the method returns E_INVALIDARG.
-Copies the specified region from the specified render target into the current bitmap.
-In the current bitmap, the upper-left corner of the area to which the region specified by srcRect is copied.
The render target that contains the region to copy.
The area of renderTarget to copy.
If this method succeeds, it returns
This method does not update the size of the current bitmap. If the contents of the source bitmap do not fit in the current bitmap, this method fails. Also, note that this method does not perform format conversion, and will fail if the bitmap formats do not match.
Calling this method may cause the current batch to flush if the bitmap is active in the batch. If the batch that was flushed does not complete successfully, this method fails. However, this method does not clear the error state of the render target on which the batch was flushed. The failing
All clips and layers must be popped off of the render target before calling this method. The method returns
Copies the specified region from memory into the current bitmap.
-In the current bitmap, the upper-left corner of the area to which the region specified by srcRect is copied.
The data to copy.
The stride, or pitch, of the source bitmap stored in srcData. The stride is the byte count of a scanline (one row of pixels in memory). The stride can be computed from the following formula: pixel width * bytes per pixel + memory padding.
If this method succeeds, it returns
This method does not update the size of the current bitmap. If the contents of the source bitmap do not fit in the current bitmap, this method fails. Also, note that this method does not perform format conversion; the two bitmap formats should match.
If this method is passed invalid input (such as an invalid destination rectangle), it can produce unpredictable results, such as a distorted image or device failure.
Calling this method may cause the current batch to flush if the bitmap is active in the batch. If the batch that was flushed does not complete successfully, this method fails. However, this method does not clear the error state of the render target on which the batch was flushed. The failing
Starting with Windows 8.1, this method supports block compressed bitmaps. If you are using a block compressed format, the end coordinates of the srcRect parameter must be multiples of 4 or the method returns E_INVALIDARG.
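The stride formula quoted above (pixel width * bytes per pixel + memory padding) as a one-line helper; the name is hypothetical:

```cpp
#include <cstddef>

// Byte count of one scanline (one row of pixels in memory). rowPadding is
// any per-row padding the layout requires; often 0, but that is an
// assumption of this sketch, not something the API mandates.
size_t ComputeStride(size_t pixelWidth, size_t bytesPerPixel, size_t rowPadding)
{
    return pixelWidth * bytesPerPixel + rowPadding;
}
```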
- Represents a bitmap that can be used as a surface for an
Gets the color context information associated with the bitmap.
-If the bitmap was created without specifying a color context, the returned context is
Gets the options used in creating the bitmap.
-Gets either the surface that was specified when the bitmap was created, or the default surface created when the bitmap was created.
-The bitmap used must have been created from a DXGI surface render target, a derived render target, or a device context created from an
The returned surface can be used with Microsoft Direct3D or any other API that interoperates with shared surfaces. The application must transitively ensure that the surface is usable on the Direct3D device that is used in this context. For example, if using the surface with Direct2D then the Direct2D render target must have been created through
Gets the color context information associated with the bitmap.
-When this method returns, contains the address of a reference to the color context interface associated with the bitmap.
If the bitmap was created without specifying a color context, the returned context is
Gets the options used in creating the bitmap.
-This method returns the options used.
Gets either the surface that was specified when the bitmap was created, or the default surface created when the bitmap was created.
-The underlying DXGI surface for the bitmap.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
D2DERR_BITMAP_BOUND_AS_TARGET | Cannot draw with a bitmap that is currently bound as the target bitmap.
The bitmap used must have been created from a DXGI surface render target, a derived render target, or a device context created from an
The returned surface can be used with Microsoft Direct3D or any other API that interoperates with shared surfaces. The application must transitively ensure that the surface is usable on the Direct3D device that is used in this context. For example, if using the surface with Direct2D then the Direct2D render target must have been created through
Maps the given bitmap into memory.
-The options used in mapping the bitmap into memory.
When this method returns, contains a reference to the rectangle that is mapped into memory.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_INVALIDARG | One or more arguments are not valid.
D3DERR_DEVICELOST | The device has been lost but cannot be reset at this time.
The bitmap must have been created with the
Unmaps the bitmap from memory.
-The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_INVALIDARG | One or more arguments are not valid.
E_POINTER | Pointer is not valid.
Any memory returned from the Map call is now invalid and may be reclaimed by the operating system or used for other purposes.
The bitmap must have been previously mapped.
-Paints an area with a bitmap.
-A bitmap brush is used to fill a geometry with a bitmap. Like all brushes, it defines an infinite plane of content. Because bitmaps are finite, the brush relies on an "extend mode" to determine how the plane is filled horizontally and vertically.
-Gets or sets the method by which the brush horizontally tiles those areas that extend past its bitmap.
-Like all brushes,
Gets or sets the method by which the brush vertically tiles those areas that extend past its bitmap.
-Like all brushes,
Gets or sets the interpolation method used when the brush bitmap is scaled or rotated.
-This method gets the interpolation mode of a bitmap, which is specified by the
The interpolation mode of a bitmap also affects subpixel translations. In a subpixel translation, linear interpolation positions the bitmap more precisely to the application's request, but blurs the bitmap in the process.
-Gets or sets the bitmap source that this brush uses to paint.
-Specifies how the brush horizontally tiles those areas that extend past its bitmap.
-A value that specifies how the brush horizontally tiles those areas that extend past its bitmap.
Sometimes, the bitmap for a bitmap brush doesn't completely fill the area being painted. When this happens, Direct2D uses the brush's horizontal (SetExtendModeX) and vertical (SetExtendModeY) extend mode settings to determine how to fill the remaining area.
The following illustration shows the results from every possible combination of the extend modes for an
Specifies how the brush vertically tiles those areas that extend past its bitmap.
-A value that specifies how the brush vertically tiles those areas that extend past its bitmap.
Sometimes, the bitmap for a bitmap brush doesn't completely fill the area being painted. When this happens, Direct2D uses the brush's horizontal (SetExtendModeX) and vertical (SetExtendModeY) extend mode settings to determine how to fill the remaining area.
The following illustration shows the results from every possible combination of the extend modes for an
Specifies the interpolation mode used when the brush bitmap is scaled or rotated.
-The interpolation mode used when the brush bitmap is scaled or rotated.
This method sets the interpolation mode for a bitmap, which is an enum value that is specified in the
The interpolation mode of a bitmap also affects subpixel translations. In a subpixel translation, bilinear interpolation positions the bitmap more precisely to the application's request, but blurs the bitmap in the process.
-Specifies the bitmap source that this brush uses to paint.
-The bitmap source used by the brush.
This method specifies the bitmap source that this brush uses to paint. The bitmap is not resized or rescaled automatically to fit the geometry that it fills. The bitmap stays at its native size. To resize or translate the bitmap, use the SetTransform method to apply a transform to the brush.
The native size of a bitmap is the width and height in bitmap pixels, divided by the bitmap DPI. This native size forms the base tile of the brush. To tile a subregion of the bitmap, you must generate a new bitmap containing this subregion and use SetBitmap to apply it to the brush.
-Gets the method by which the brush horizontally tiles those areas that extend past its bitmap.
-A value that specifies how the brush horizontally tiles those areas that extend past its bitmap.
Like all brushes,
Gets the method by which the brush vertically tiles those areas that extend past its bitmap.
-A value that specifies how the brush vertically tiles those areas that extend past its bitmap.
Like all brushes,
Gets the interpolation method used when the brush bitmap is scaled or rotated.
-The interpolation method used when the brush bitmap is scaled or rotated.
This method gets the interpolation mode of a bitmap, which is specified by the
The interpolation mode of a bitmap also affects subpixel translations. In a subpixel translation, linear interpolation positions the bitmap more precisely to the application's request, but blurs the bitmap in the process.
-Gets the bitmap source that this brush uses to paint.
-When this method returns, contains the address to a reference to the bitmap with which this brush paints.
Paints an area with a bitmap.
-Returns or sets the current interpolation mode of the brush.
-Sets the interpolation mode for the brush.
-The mode to use.
Returns the current interpolation mode of the brush.
-The current interpolation mode.
Describes the pixel format and dpi of a bitmap.
-The bitmap's pixel format and alpha mode.
The horizontal dpi of the bitmap.
The vertical dpi of the bitmap.
This structure allows a
If both dpiX and dpiY are 0, the dpi of the bitmap will be set to the desktop dpi if the device context is a windowed context, or 96 dpi for any other device context.
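That defaulting rule can be sketched as below. This is a hypothetical helper, not Direct2D code; `isWindowedContext` and `desktopDpi` stand in for state that Direct2D determines internally.

```cpp
// Illustrative sketch of the DPI-defaulting rule: if both dpiX and dpiY
// are 0, a windowed device context falls back to the desktop DPI and any
// other device context falls back to 96 DPI.
struct Dpi { float x; float y; };

Dpi ResolveBitmapDpi(float dpiX, float dpiY,
                     bool isWindowedContext, float desktopDpi)
{
    if (dpiX == 0.0f && dpiY == 0.0f) {
        float fallback = isWindowedContext ? desktopDpi : 96.0f;
        return Dpi{ fallback, fallback };
    }
    return Dpi{ dpiX, dpiY };
}
```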
-Renders to an intermediate texture created by the CreateCompatibleRenderTarget method.
-An
To write directly to a WIC bitmap instead, use the
Retrieves the bitmap for this render target. The returned bitmap can be used for drawing operations.
-The DPI for the
Retrieves the bitmap for this render target. The returned bitmap can be used for drawing operations.
-When this method returns, contains the address of a reference to the bitmap for this render target. This bitmap can be used for drawing operations.
If this method succeeds, it returns
The DPI for the
Provides methods to allow a blend operation to be inserted into a transform graph.
The image output of the blend transform is the same as rendering an image effect graph with these steps:
Gets or sets the blend description of the corresponding blend transform object.
-Changes the blend description of the corresponding blend transform object.
-The new blend description specified for the blend transform.
Gets the blend description of the corresponding blend transform object.
-When this method returns, contains the blend description specified for the blend transform.
Extends the input rectangle to infinity using the specified extend modes.
-Gets or sets the extend mode in the x direction.
-Gets or sets the extend mode in the y direction.
-Sets the extend mode in the x direction.
-The extend mode in the x direction.
If the extend mode enumeration is invalid, this operation is ignored.
-Sets the extend mode in the y direction.
-The extend mode in the y direction.
If the extend mode enumeration is invalid, this operation is ignored.
-Gets the extend mode in the x direction.
-This method returns the extend mode in the x direction.
Gets the extend mode in the y direction.
-This method returns the extend mode in the y direction.
A support transform for effects to modify the output rectangle of the previous effect or bitmap.
-The support transform can be used for two different reasons.
To indicate that a region of its input image is already transparent black. The expanded area will be treated as transparent black.
This can increase efficiency for rendering bitmaps.
To increase the size of the input image.

-This sets the output bounds for the support transform.
-The output bounds.
Returns the output rectangle of the support transform.
-The output bounds.
Represents a color context that can be used with an
Gets the color space of the color context.
-Gets the size of the color profile associated with the bitmap.
-This can be used to allocate a buffer to receive the color profile bytes associated with the context.
-Gets the color space of the color context.
-This method returns the color space of the contained ICC profile.
Gets the size of the color profile associated with the bitmap.
-This method returns the size of the profile in bytes.
This can be used to allocate a buffer to receive the color profile bytes associated with the context.
-Gets the color profile bytes for an
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
D2DERR_INSUFFICIENT_BUFFER | The supplied buffer was too small to accommodate the data. |
If profileSize is insufficient to store the entire profile, profile is zero-initialized before this method fails.
-This interface performs all the same functions as the
Represents a color context to be used with the Color Management Effect.
-Represents a color context to be used with the Color Management Effect.
-Represents a sequence of commands that can be recorded and played back.
-The command list does not include static copies of resources with the recorded set of commands. All bitmaps, effects, and geometries are stored as references to the actual resource and all the brushes are stored by value. All the resource creation and destruction happens outside of the command list. The following table lists resources and how they are treated inside of a command list.
Resource | How it is treated by the command list |
---|---|
Solid-color brush | Passed by value. |
Bitmap brush | The brush is passed by value but the bitmap that is used to create the brush is in fact referenced. |
Gradient brushes (both linear and radial gradient) | The brush is passed by value but the gradient stop collection itself is referenced. The gradient stop collection object is immutable. |
Bitmaps | Passed by reference. |
Drawing state block | The actual state on the device context is converted into set functions like set transform and is passed by value. |
Geometry | Immutable object passed by value. |
Stroke style | Immutable object passed by value. |
Mesh | Immutable object passed by value. |
-Streams the contents of the command list to the specified command sink.
-The sink into which the command list will be streamed.
If the method succeeds, it returns
The return value indicates any failures the command sink implementation returns through its EndDraw method.
The command sink can be implemented by any caller of the API.
If the caller makes any design-time failure calls while a command list is selected as a target, the command list is placed in an error state. The stream call fails without making any calls to the passed in sink.
Sample use:
class MyCommandSink : public ID2D1CommandSink
{
public:
    // All of the methods implemented here.
};

HRESULT StreamToMyCommandSink(__in ID2D1CommandList *pCommandList)
{
    HRESULT hr = S_OK;
    MyCommandSink *pCommandSink = new MyCommandSink();
    hr = pCommandSink ? S_OK : E_OUTOFMEMORY;
    if (SUCCEEDED(hr))
    {
        // Receive the contents of the command list streamed to the sink.
        hr = pCommandList->Stream(pCommandSink);
    }
    SafeRelease(&pCommandSink);
    return hr;
}
Instructs the command list to stop accepting commands so that you can use it as an input to an effect or in a call to
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
D2DERR_WRONG_STATE | Close has already been called on the command list. |
Note: If the device context associated with the command list has an error, the command list returns the same error.
This method returns
If the Close method returns an error, any future use of the command list results in the same error.
-
The command sink is implemented by you for an application when you want to receive a playback of the commands recorded in a command list. A typical usage will be for transforming the command list into another format such as XPS when some degree of conversion between the Direct2D primitives and the target format is required.
The command sink interface doesn't have any resource creation methods on it. The resources are still logically bound to the Direct2D device on which the command list was created and will be passed in to the command sink implementation.
-The
The
Not all methods implemented by
This interface performs all the same functions as the existing
Enables access to the new primitive blend modes, MIN and ADD.
-This interface performs all the same functions as the existing
Enables access to the new primitive blend modes, MIN and ADD.
-Sets a new primitive blend mode.
-The primitive blend that will apply to subsequent primitives.
If the method succeeds, it returns
This interface performs all the same functions as the existing
This interface performs all the same functions as the existing
Renders the given ink object using the given brush and ink style.
-The ink object to be rendered.
The brush with which to render the ink object.
The ink style to use when rendering the ink object.
This method does not return a value.
Renders a given gradient mesh to the target.
-The gradient mesh to be rendered.
This method does not return a value.
Draws a metafile to the command sink using the given source and destination rectangles.
-The metafile to draw.
The rectangle in the target where the metafile will be drawn, relative to the upper left corner (defined in DIPs). If
The rectangle of the source metafile that will be drawn, relative to the upper left corner (defined in DIPs). If
This method does not return a value.
This interface performs all the same functions as the existing
Renders part or all of the given sprite batch to the device context using the specified drawing options.
-The sprite batch to draw.
The index of the first sprite in the sprite batch to draw.
The number of sprites to draw.
The bitmap from which the sprites are to be sourced. Each sprite's source rectangle refers to a portion of this bitmap.
The interpolation mode to use when drawing this sprite batch. This determines how Direct2D interpolates pixels within the drawn sprites if scaling is performed.
The additional drawing options, if any, to be used for this sprite batch.
If this method succeeds, it returns
This interface performs all the same functions as the existing
Renders part or all of the given sprite batch to the device context using the specified drawing options.
-The sprite batch to draw.
The index of the first sprite in the sprite batch to draw.
The number of sprites to draw.
The bitmap from which the sprites are to be sourced. Each sprite's source rectangle refers to a portion of this bitmap.
The interpolation mode to use when drawing this sprite batch. This determines how Direct2D interpolates pixels within the drawn sprites if scaling is performed.
The additional drawing options, if any, to be used for this sprite batch.
If this method succeeds, it returns
Renders part or all of the given sprite batch to the device context using the specified drawing options.
-The sprite batch to draw.
The index of the first sprite in the sprite batch to draw.
The number of sprites to draw.
The bitmap from which the sprites are to be sourced. Each sprite's source rectangle refers to a portion of this bitmap.
The interpolation mode to use when drawing this sprite batch. This determines how Direct2D interpolates pixels within the drawn sprites if scaling is performed.
The additional drawing options, if any, to be used for this sprite batch.
If this method succeeds, it returns
This interface performs all the same functions as the existing
Sets a new primitive blend mode. Allows access to the MAX primitive blend mode.
-If this method succeeds, it returns
This interface performs all the same functions as the existing
Sets a new primitive blend mode. Allows access to the MAX primitive blend mode.
-If this method succeeds, it returns
The command sink is implemented by you for an application when you want to receive a playback of the commands recorded in a command list. A typical usage will be for transforming the command list into another format such as XPS when some degree of conversion between the Direct2D primitives and the target format is required.
The command sink interface doesn't have any resource creation methods on it. The resources are still logically bound to the Direct2D device on which the command list was created and will be passed in to the command sink implementation.
-The
The
Not all methods implemented by
Notifies the implementation of the command sink that drawing is about to commence.
- This method always returns
Indicates when
If the method/function succeeds, it returns
The
It allows the calling function or method to indicate a failure back to the stream implementation.
-Sets the antialiasing mode that will be used to render any subsequent geometry.
-The antialiasing mode selected for the command list.
If the method succeeds, it returns
Sets the tags that correspond to the tags in the command sink.
-The first tag to associate with the primitive.
The second tag to associate with the primitive.
If the method succeeds, it returns
Indicates the new default antialiasing mode for text.
-The antialiasing mode for the text.
If the method succeeds, it returns
Indicates more detailed text rendering parameters.
-The parameters to use for text rendering.
If the method succeeds, it returns
Sets a new transform.
-The transform to be set.
If the method succeeds, it returns
The transform will be applied to the corresponding device context.
-Sets a new primitive blend mode.
-The primitive blend that will apply to subsequent primitives.
If the method succeeds, it returns
The unit mode changes the meaning of subsequent units from device-independent pixels (DIPs) to pixels or the other way. The command sink does not record a DPI, this is implied by the playback context or other playback interface such as
If the method succeeds, it returns
The unit mode changes the interpretation of units from DIPs to pixels or vice versa.
-Clears the drawing area to the specified color.
-The color to which the command sink should be cleared.
If the method succeeds, it returns
The clear color is restricted by the currently selected clip and layer bounds.
If no color is specified, the color should be interpreted by context. Examples include but are not limited to:
Indicates the glyphs to be drawn.
-The upper left corner of the baseline.
The glyphs to render.
Additional non-rendering information about the glyphs.
The brush used to fill the glyphs.
The measuring mode to apply to the glyphs.
If the method succeeds, it returns
DrawText and DrawTextLayout are broken down into glyph runs and rectangles by the time the command sink is processed. So, these methods aren't available on the command sink. Since the application may require additional callback processing when calling DrawTextLayout, this semantic can't be easily preserved in the command list.
-Draws a line between two points.
-The start point of the line.
The end point of the line.
The brush used to fill the line.
The width of the stroke to fill the line.
The style of the stroke. If not specified, the stroke is solid.
If the method succeeds, it returns
Indicates the geometry to be drawn to the command sink.
-The geometry to be stroked.
The brush that will be used to fill the stroked geometry.
The width of the stroke.
The style of the stroke.
An
Ellipses and rounded rectangles are converted to the corresponding ellipse and rounded rectangle geometries before calling into the DrawGeometry method. -
-Draws a rectangle.
-The rectangle to be drawn to the command sink.
The brush used to stroke the geometry.
The width of the stroke.
The style of the stroke.
If the method succeeds, it returns
Draws a bitmap to the render target.
-The bitmap to draw.
The destination rectangle. The default is the size of the bitmap and the location is the upper left corner of the render target.
The opacity of the bitmap.
The interpolation mode to use.
An optional source rectangle.
An optional perspective transform.
This method does not return a value.
The destinationRectangle parameter defines the rectangle in the target where the bitmap will appear (in device-independent pixels (DIPs)). This is affected by the currently set transform and the perspective transform, if set. If you specify
The sourceRectangle defines the sub-rectangle of the source bitmap (in DIPs). DrawBitmap clips this rectangle to the size of the source bitmap, so it's impossible to sample outside of the bitmap. If you specify
The perspectiveTransform is specified in addition to the transform on device context. -
-Draws the provided image to the command sink.
-The image to be drawn to the command sink.
This defines the offset in the destination space that the image will be rendered to. The entire logical extent of the image will be rendered to the corresponding destination. If not specified, the destination origin will be (0, 0). The top-left corner of the image will be mapped to the target offset. This will not necessarily be the origin.
The corresponding rectangle in the image space will be mapped to the provided origins when processing the image.
The interpolation mode to use to scale the image if necessary.
If specified, the composite mode that will be applied to the limits of the currently selected clip.
If the method succeeds, it returns
Because the image can itself be a command list or contain an effect graph that in turn contains a command list, this method can result in recursive processing.
-Draw a metafile to the device context.
-The metafile to draw.
The offset from the upper left corner of the render target.
This method does not return a value.
The targetOffset defines the offset in the destination space that the image will be rendered to. The entire logical extent of the image is rendered to the corresponding destination. If you don't specify the offset, the destination origin will be (0, 0). The top-left corner of the image will be mapped to the target offset. This will not necessarily be the origin. -
-Indicates a mesh to be filled by the command sink.
-The mesh object to be filled.
The brush with which to fill the mesh.
If the method succeeds, it returns
Fills an opacity mask on the command sink.
-The bitmap whose alpha channel will be sampled to define the opacity mask.
The brush with which to fill the mask.
The destination rectangle in which to fill the mask. If not specified, this is the origin.
The source rectangle within the opacity mask. If not specified, this is the entire mask.
If the method succeeds, it returns
The opacity mask bitmap must be considered to be clamped on each axis.
-Indicates to the command sink a geometry to be filled.
-The geometry that should be filled.
The primary brush used to fill the geometry.
A brush whose alpha channel is used to modify the opacity of the primary fill brush.
If the method succeeds, it returns
If the opacity brush is specified, the primary brush will be a bitmap brush fixed on both the x-axis and the y-axis.
Ellipses and rounded rectangles are converted to the corresponding geometry before being passed to FillGeometry.
-Indicates to the command sink a rectangle to be filled.
-The rectangle to fill.
The brush with which to fill the rectangle.
If the method succeeds, it returns
Pushes a clipping rectangle onto the clip and layer stack.
-The rectangle that defines the clip.
The antialias mode for the clip.
If the method succeeds, it returns
If the current world transform is not preserving the axis, clipRectangle is transformed and the bounds of the transformed rectangle are used instead.
-Pushes a layer onto the clip and layer stack.
-The parameters that define the layer.
The layer resource that receives subsequent drawing operations.
If the method succeeds, it returns
Removes an axis-aligned clip from the layer and clip stack.
-If the method succeeds, it returns
Removes a layer from the layer and clip stack.
-If the method succeeds, it returns
Enables specification of information for a compute-shader rendering pass.
-The transform changes the state on this render information to specify the compute shader and its dependent resources.
-Establishes or changes the constant buffer data for this transform.
-The data applied to the constant buffer.
The number of bytes of data in the constant buffer.
If this method succeeds, it returns
Sets the compute shader to the given shader resource. The resource must be loaded before this call is made.
-The
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
Sets the resource texture corresponding to the given shader texture index to the given texture resource. The texture resource must already have been loaded with
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
This method allows a compute-shader-based transform to select the number of thread groups to execute based on the number of output pixels it needs to fill.
-If this call fails, the corresponding
This method allows a compute-shader-based transform to select the number of thread groups to execute based on the number of output pixels it needs to fill.
-If this call fails, the corresponding
Sets the render information used to specify the compute shader pass.
-The render information object to set.
If the method succeeds, it returns
If this method fails,
This method allows a compute-shader-based transform to select the number of thread groups to execute based on the number of output pixels it needs to fill.
-The output rectangle that will be filled by the compute transform.
The number of threads in the x dimension.
The number of threads in the y dimension.
The number of threads in the z dimension.
If the method succeeds, it returns
If this call fails, the corresponding
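The thread-group calculation above amounts to rounding the output extent up to whole groups. A sketch, with illustrative helper and struct names (not Direct2D APIs):

```cpp
// Illustrative sketch: a compute-shader-based transform typically derives
// its thread-group counts from the output rectangle by ceiling-dividing
// each extent by the shader's threads-per-group in that dimension.
struct RectL { long left; long top; long right; long bottom; };
struct GroupCounts { long x; long y; };

GroupCounts ThreadgroupsForOutput(const RectL& out,
                                  long threadsX, long threadsY)
{
    auto ceilDiv = [](long extent, long per) {
        return (extent + per - 1) / per;  // round up to cover every pixel
    };
    return GroupCounts{ ceilDiv(out.right - out.left, threadsX),
                        ceilDiv(out.bottom - out.top, threadsY) };
}
```

A 100x64-pixel output with 32x32 thread groups needs 4x2 groups; the partial last column of groups still gets dispatched.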
Allows a custom effect's interface and behavior to be specified by the effect author.
-This interface is created by the effect author from a static factory registered through the ID2D1Factory::RegisterEffect method.
-Allows a custom effect's interface and behavior to be specified by the effect author.
-This interface is created by the effect author from a static factory registered through the ID2D1Factory::RegisterEffect method.
-The effect can use this method to do one-time initialization tasks. If this method is not needed, the method can just return
An internal context interface that creates and returns effect author-centric types.
The effect can populate the transform graph with a topology and can update it later.
If the method succeeds, it returns
This moves resource creation cost to the CreateEffect call, rather than during rendering.
If the implementation fails this call, the corresponding
The following example shows an effect implementing an initialize method.
-Prepares an effect for the rendering process.
-Indicates the type of change the effect should expect.
If the method succeeds, it returns
This method is called by the renderer when the effect is within an effect graph that is drawn.
The method will be called:
The method will not otherwise be called. The transforms created by the effect will be called to handle their input and output rectangles for every draw call.
Most effects defer creating any resources or specifying a topology until this call is made. They store their properties and map them to a concrete set of rendering techniques when first drawn.
-The renderer calls this method to provide the effect implementation with a way to specify its transform graph and transform graph changes.
The renderer calls this method when:
The graph to which the effect describes its transform topology through the SetDescription call.
An error that prevents the effect from being initialized if called as part of the CreateEffect call. If the effect fails a subsequent SetGraph call:
Defines a vertex shader and the input element description to define the input layout. The combination is used to allow a custom vertex effect to create a custom vertex shader and pass it a custom layout.
-The vertex shader will be loaded by the CreateVertexBuffer call that accepts the vertex buffer properties.
This structure does not need to be specified if one of the standard vertex shaders is used.
-The unique ID of the vertex shader.
An array of input assembler stage data types.
An array of input assembler stage data types.
The number of input elements in the vertex shader.
The vertex stride.
Creates a factory object that can be used to create Direct2D resources.
-The threading model of the factory and the resources it creates.
A reference to the IID of
The level of detail provided to the debugging layer.
When this method returns, contains the address to a reference to the new factory.
If this function succeeds, it returns
The
Creates a rotation transformation that rotates by the specified angle about the specified point.
-The clockwise rotation angle, in degrees.
The point about which to rotate.
When this method returns, contains the new rotation transformation. You must allocate storage for this parameter.
Rotation occurs in the plane of the 2-D surface.
-Creates a skew transformation that has the specified x-axis angle, y-axis angle, and center point.
-The x-axis skew angle, which is measured in degrees counterclockwise from the y-axis.
The y-axis skew angle, which is measured in degrees counterclockwise from the x-axis.
The center point of the skew operation.
When this method returns, contains the rotation transformation. You must allocate storage for this parameter.
Indicates whether the specified matrix is invertible.
-The matrix to test.
true if the matrix is invertible; otherwise, false.
Tries to invert the specified matrix.
-The matrix to invert.
true if the matrix was inverted; otherwise, false.
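The math behind these two helpers can be sketched portably. This is an illustrative reimplementation, not the Direct2D library code; it assumes Direct2D's row-vector convention for 3x2 matrices, where a point maps as (x', y') = (x*_11 + y*_21 + _31, x*_12 + y*_22 + _32).

```cpp
// Illustrative 3x2 matrix in the style of D2D1_MATRIX_3X2_F.
struct Matrix3x2 { float _11, _12, _21, _22, _31, _32; };

// The matrix is invertible iff the determinant of its 2x2 linear part
// is nonzero; the translation row never affects invertibility.
bool IsInvertible(const Matrix3x2& m)
{
    return (m._11 * m._22 - m._12 * m._21) != 0.0f;
}

// Inverts m in place; returns false (leaving m unchanged) if singular.
bool Invert(Matrix3x2& m)
{
    float det = m._11 * m._22 - m._12 * m._21;
    if (det == 0.0f) return false;
    Matrix3x2 r;
    r._11 =  m._22 / det;
    r._12 = -m._12 / det;
    r._21 = -m._21 / det;
    r._22 =  m._11 / det;
    r._31 = (m._21 * m._32 - m._22 * m._31) / det;
    r._32 = (m._12 * m._31 - m._11 * m._32) / det;
    m = r;
    return true;
}
```

For example, inverting the "scale by 2, then translate by (5, 6)" matrix {2, 0, 0, 2, 5, 6} yields {0.5, 0, 0, 0.5, -2.5, -3}, which maps (7, 8) back to (1, 1).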
Creates a new Direct2D device associated with the provided DXGI device.
-The DXGI device the Direct2D device is associated with.
The properties to apply to the Direct2D device.
When this function returns, contains the address of a reference to a Direct2D device.
The function returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid value was passed to the method. |
This function will also create a new
If the creation properties are not specified, then d2dDevice will inherit its threading mode from dxgiDevice and debug tracing will not be enabled.
-Creates a new Direct2D device context associated with a DXGI surface.
-The DXGI surface the Direct2D device context is associated with.
The properties to apply to the Direct2D device context.
When this function returns, contains the address of a reference to a Direct2D device context.
The function returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid value was passed to the method. |
This function will also create a new
This function will also create a new
The DXGI device will be specified implicitly through dxgiSurface.
If creationProperties are not specified, the Direct2D device will inherit its threading mode from the DXGI device implied by dxgiSurface and debug tracing will not be enabled.
-Converts the given color from one colorspace to another.
-The source color space.
The destination color space.
The source color.
The converted color.
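For the common sRGB-to-scRGB direction, the per-channel work is applying the standard IEC 61966-2-1 sRGB transfer function. The sketch below shows that standard curve for illustration; it is not the Direct2D implementation, and the actual function also handles other color-space pairs.

```cpp
#include <cmath>

// Standard sRGB decoding: gamma-encoded [0,1] channel -> linear light.
float SrgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// Standard sRGB encoding: linear light -> gamma-encoded channel.
float LinearToSrgb(float c)
{
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```

The two functions are inverses, so converting to the other color space and back returns the original channel value (up to rounding).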
Returns the sine and cosine of an angle.
-The angle to calculate.
The sine of the angle.
The cosine of the angle.
Returns the tangent of an angle.
-The angle to calculate the tangent for.
The tangent of the angle.
Returns the length of a 3 dimensional vector.
-The x value of the vector.
The y value of the vector.
The z value of the vector.
The length of the vector.
Computes the maximum factor by which a given transform can stretch any vector.
-The input transform matrix.
The scale factor.
Formally, if M is the input matrix, this method will return the maximum value of |V * M| / |V| for all vectors V, where |.| denotes length.
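That maximum is the largest singular value of the 2x2 linear part of the matrix. A hedged sketch of the closed-form computation (field names mirror D2D1_MATRIX_3X2_F; this is not the library implementation):

```cpp
#include <cmath>

// Largest singular value of the 2x2 matrix [m11 m12; m21 m22], i.e. the
// maximum of |V * M| / |V| over all row vectors V. Translation is ignored,
// matching the behavior described for the transform-stretch computation.
float MaxScaleFactor(float m11, float m12, float m21, float m22)
{
    float s1 = m11 * m11 + m12 * m12 + m21 * m21 + m22 * m22;
    float d  = m11 * m11 + m12 * m12 - m21 * m21 - m22 * m22;
    float c  = m11 * m21 + m12 * m22;
    float s2 = std::sqrt(d * d + 4.0f * c * c);
    return std::sqrt((s1 + s2) * 0.5f);  // larger eigenvalue of M*M^T, rooted
}
```

Sanity checks: the identity and any pure rotation return 1, and a non-uniform scale returns its larger axis factor.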
Note: Since this describes how M affects vectors (rather than points), the translation components (_31 and _32) of M are ignored. -Returns the interior points for a gradient mesh patch based on the points defining a Coons patch.
Note: This function is called by the GradientMeshPatchFromCoonsPatch function and is not intended to be used directly.
-Represents a resource domain whose objects and device contexts can be used together.
-Sets the maximum amount of texture memory Direct2D accumulates before it purges the image caches and cached texture allocations.
-Creates a new device context from a Direct2D device.
-The options to be applied to the created device context.
When this method returns, contains the address of a reference to the new device context.
If the method succeeds, it returns
The new device context will not have a selected target bitmap. The caller must create and select a bitmap as the target surface of the context.
-Creates an
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_FAIL | Generic failure code. |
D2DERR_PRINT_FORMAT_NOT_SUPPORTED | The print format is not supported by the document target. |
Sets the maximum amount of texture memory Direct2D accumulates before it purges the image caches and cached texture allocations.
-The new maximum texture memory in bytes.
Sets the maximum amount of texture memory Direct2D accumulates before it purges the image caches and cached texture allocations.
-The maximum amount of texture memory in bytes.
Clears all of the rendering resources used by Direct2D.
-Discards only resources that haven't been used for greater than the specified time in milliseconds. The default is 0 milliseconds.
Represents a resource domain whose objects and device contexts can be used together. This interface performs all the same functions as the existing
Retrieves or sets the current rendering priority of the device.
-Retrieves the current rendering priority of the device.
-The current rendering priority of the device.
Sets the priority of Direct2D rendering operations performed on any device context associated with the device.
-The desired rendering priority for the device and associated contexts.
Calling this method affects the rendering priority of all device contexts associated with the device. This method can be called at any time, but is not guaranteed to take effect until the beginning of the next frame. The recommended usage is to call this method outside of BeginDraw and EndDraw blocks. Cycling this property frequently within drawing blocks will effectively reduce the benefits of any throttling that is applied.
-Represents a resource domain whose objects and device contexts can be used together. This interface performs all the same functions as the existing
Represents a resource domain whose objects and device contexts can be used together. This interface performs all the same functions as the existing
Returns the DXGI device associated with this Direct2D device.
-Creates a new
If this method succeeds, it returns
Flush all device contexts that reference a given bitmap.
-The bitmap, created on this device, for which all referencing device contexts will be flushed.
Returns the DXGI device associated with this Direct2D device.
-The DXGI device associated with this Direct2D device.
If this method succeeds, it returns
Represents a resource domain whose objects and device contexts can be used together. This interface performs all the same functions as the
Creates a new
If this method succeeds, it returns
Represents a resource domain whose objects and device contexts can be used together. This interface performs all the same functions as the
Gets or sets the maximum capacity of the color glyph cache.
-Creates a new device context from a Direct2D device.
-The options to be applied to the created device context.
When this method returns, contains the address of a reference to the new device context.
If the method succeeds, it returns
The new device context will not have a selected target bitmap. The caller must create and select a bitmap as the target surface of the context.
-Sets the maximum capacity of the color glyph cache.
-The maximum capacity of the color glyph cache.
The color glyph cache is used to store color bitmap glyphs and SVG glyphs, enabling faster performance if the same glyphs are needed again. The capacity determines the amount of memory that D2D may use to store glyphs that the application does not already reference. If the application references a glyph using GetColorBitmapGlyphImage or GetSvgGlyphImage, after it has been evicted, this glyph does not count toward the cache capacity.
-Gets the maximum capacity of the color glyph cache.
-Returns the maximum capacity of the color glyph cache in bytes.
Represents a resource domain whose objects and device contexts can be used together.
-Represents a set of state and command buffers that are used to render to a target.
The device context can render to a target bitmap or a command list. -
-Any resource created from a device context can be shared with any other resource created from a device context when both contexts are created on the same device.
-Gets the device associated with a device context.
-The application can retrieve the device even if it is created from an earlier render target code-path. The application must use an
Gets or sets the target currently associated with the device context.
-If a target is not associated with the device context, target will contain
If the currently selected target is a bitmap rather than a command list, the application can gain access to the initial bitmaps created by using one of the following methods:
It is not possible for an application to destroy these bitmaps. All of these bitmaps are bindable as bitmap targets. However not all of these bitmaps can be used as bitmap sources for
CreateDxgiSurfaceRenderTarget will create a bitmap that is usable as a bitmap source if the DXGI surface is bindable as a shader resource view.
CreateCompatibleRenderTarget will always create bitmaps that are usable as a bitmap source.
Direct2D will only lock bitmaps that are not currently locked.
Calling QueryInterface for
Although the target can be a command list, it cannot be any other type of image. It cannot be the output image of an effect.
-Gets or sets the rendering controls that have been applied to the context.
-Returns or sets the currently set primitive blend used by the device context.
-Gets or sets the mode that is being used to interpret values by the device context.
-Creates a bitmap that can be used as a target surface, for reading back to the CPU, or as a source for the DrawBitmap and
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
D3DERR_OUTOFVIDEOMEMORY | Direct3D does not have enough display memory to perform the operation.
The new bitmap can be used as a target for SetTarget if it is created with
Creates a Direct2D bitmap by copying a WIC bitmap.
-The WIC bitmap source to copy from.
A bitmap properties structure that specifies bitmap creation options.
The address of the newly created bitmap object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
Starting with Windows 8.1, the bitmapProperties parameter is optional. When it is not specified, the created bitmap inherits the pixel format and alpha mode from wicBitmapSource. For a list of supported pixel formats and alpha modes, see Supported Pixel Formats and Alpha Modes.
When the bitmapProperties parameter is specified, the value in bitmapProperties->pixelFormat must either be
When bitmapProperties->pixelFormat.alphaMode is set to
Creates a color context.
-The space of color context to create.
A buffer containing the ICC profile bytes used to initialize the color context when space is
The size in bytes of Profile.
When this method returns, contains the address of a reference to a new color context object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
The new color context can be used in
When space is
Creates a color context by loading it from the specified filename. The profile bytes are the contents of the file specified by Filename.
-The path to the file containing the profile bytes to initialize the color context with.
When this method returns, contains the address of a reference to a new color context.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
The new color context can be used in
Creates a color context from an
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
The new color context can be used in
Creates a bitmap from a DXGI surface that can be set as a target surface or have additional color context information specified.
-The DXGI surface from which the bitmap can be created.
Note: The DXGI surface must have been created from the same Direct3D device that the Direct2D device context is associated with.
The bitmap properties specified in addition to the surface.
When this method returns, contains the address of a reference to a new bitmap object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
D3DERR_OUTOFVIDEOMEMORY | Direct3D does not have enough display memory to perform the operation.
If the bitmap properties are not specified, the following information is assumed:
If the bitmap properties are specified, the bitmap properties will be used as follows:
Creates an effect for the specified class ID.
-The class ID of the effect to create. See Built-in Effects for a list of effect IDs.
When this method returns, contains the address of a reference to a new effect.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
D3DERR_OUTOFVIDEOMEMORY | Direct3D does not have enough display memory to perform the operation.
D2DERR_EFFECT_IS_NOT_REGISTERED | The specified effect is not registered by the system.
D2DERR_INSUFFICIENT_DEVICE_CAPABILITIES | The effect requires capabilities not supported by the D2D device.
If the created effect is a custom effect that is implemented in a DLL, this doesn't increment the reference count for that DLL. If the application deletes an effect while that effect is loaded, the resulting behavior is unpredictable.
-Creates a gradient stop collection, enabling the gradient to contain color channels with values outside of [0,1] and also enabling rendering to a high-color render target with interpolation in sRGB space.
-An array of color values and offsets.
The number of elements in the gradientStops array.
Specifies both the input color space and the space in which the color interpolation occurs.
The color space that colors will be converted to after interpolation occurs.
The precision of the texture used to hold interpolated values.
Note: This method will fail if the underlying Direct3D device does not support the requested buffer precision. Use IsBufferPrecisionSupported to determine whether it is supported.
Defines how colors outside of the range defined by the stop collection are determined.
Defines how colors are interpolated.
The new gradient stop collection.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
This method linearly interpolates between the color stops. An optional color space conversion is applied post-interpolation. Whether and how this gamma conversion is applied is determined by the pre- and post-interpolation color spaces. This method will fail if the device context does not support the requested buffer precision.
In order to get the desired result, you need to ensure that the inputs are specified in the correct color space. -
You must always specify colors in straight alpha, regardless of interpolation mode being premultiplied or straight. The interpolation mode only affects the interpolated values. Likewise, the stops returned by
If you specify
Starting with Windows 8, the interpolation behavior of this method has changed.
The table here shows the behavior in Windows 7 and earlier.
Gamma | Before Interpolation Behavior | After Interpolation Behavior | GetColorInterpolationGamma (output color space)
---|---|---|---
1.0 | Clamps the inputs and then converts from sRGB to scRGB. | Converts from scRGB to sRGB post-interpolation. | 1.0
2.2 | Clamps the inputs. | No Operation | 2.2
The table here shows the behavior in Windows 8 and later.
Gamma | Before Interpolation Behavior | After Interpolation Behavior | GetColorInterpolationGamma (output color space)
---|---|---|---
sRGB to scRGB | No Operation | Clamps the outputs and then converts from sRGB to scRGB. | 1.0
scRGB to sRGB | No Operation | Clamps the outputs and then converts from scRGB to sRGB. | 2.2
sRGB to sRGB | No Operation | No Operation | 2.2
scRGB to scRGB | No Operation | No Operation | 1.0
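The sRGB/scRGB conversions referenced in the tables above follow the standard sRGB transfer function (IEC 61966-2-1). As a minimal sketch of that per-channel math (Direct2D's internal implementation may differ in clamping and precision):

```cpp
#include <cassert>
#include <cmath>

// Standard sRGB transfer functions, per channel. A sketch of the
// conversions the tables above refer to, not Direct2D code.
double SrgbToLinear(double c)   // sRGB -> scRGB (linear)
{
    return (c <= 0.04045) ? c / 12.92
                          : std::pow((c + 0.055) / 1.055, 2.4);
}

double LinearToSrgb(double c)   // scRGB (linear) -> sRGB
{
    return (c <= 0.0031308) ? c * 12.92
                            : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
}
```

Round-tripping a channel value through both functions returns the original value, which is why the "No Operation" rows above are lossless.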
-Creates an image brush. The input image can be any type of image, including a bitmap, effect, or a command list. -
-The image to be used as a source for the image brush.
The properties specific to an image brush.
Properties common to all brushes.
When this method returns, contains the address of a reference to the input rectangles.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
The image brush can be used to fill an arbitrary geometry, an opacity mask or text.
This sample illustrates drawing a rectangle with an image brush.
HRESULT CreatePatternBrush(
    __in ID2D1DeviceContext *pDeviceContext,
    __deref_out ID2D1ImageBrush **ppImageBrush
    )
{
    HRESULT hr = S_OK;

    ID2D1Image *pOldTarget = NULL;
    pDeviceContext->GetTarget(&pOldTarget);

    ID2D1CommandList *pCommandList = NULL;
    hr = pDeviceContext->CreateCommandList(&pCommandList);
    if (SUCCEEDED(hr))
    {
        pDeviceContext->SetTarget(pCommandList);
        hr = RenderPatternToCommandList(pDeviceContext);
    }
    pDeviceContext->SetTarget(pOldTarget);

    ID2D1ImageBrush *pImageBrush = NULL;
    if (SUCCEEDED(hr))
    {
        hr = pDeviceContext->CreateImageBrush(
            pCommandList,
            D2D1::ImageBrushProperties(
                D2D1::RectF(198, 298, 370, 470),
                D2D1_EXTEND_MODE_WRAP,
                D2D1_EXTEND_MODE_WRAP,
                D2D1_INTERPOLATION_MODE_LINEAR),
            &pImageBrush);
    }

    // Fill a rectangle with the image brush.
    if (SUCCEEDED(hr))
    {
        pDeviceContext->FillRectangle(
            D2D1::RectF(0, 0, 100, 100), pImageBrush);
    }

    SafeRelease(&pImageBrush);
    SafeRelease(&pCommandList);
    SafeRelease(&pOldTarget);

    return hr;
}
Creates a bitmap brush, the input image is a Direct2D bitmap object.
-The bitmap to use as the brush.
A bitmap brush properties structure.
A brush properties structure.
The address of the newly created bitmap brush object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
Creates a
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
A
Indicates whether the format is supported by the device context. The formats supported are usually determined by the underlying hardware.
-The DXGI format to check.
Returns TRUE if the format is supported. Returns FALSE if the format is not supported.
You can use supported formats in the
Indicates whether the buffer precision is supported by the underlying Direct3D device.
-Returns TRUE if the buffer precision is supported. Returns FALSE if it is not supported.
Gets the bounds of an image without the world transform of the context applied.
-The image whose bounds will be calculated.
When this method returns, contains a reference to the bounds of the image in device independent pixels (DIPs) and in local space.
The image bounds don't include multiplication by the world transform. They do reflect the current DPI, unit mode, and interpolation mode of the context. To get the bounds that include the world transform, use
The returned bounds reflect which pixels would be impacted by calling DrawImage with a target offset of (0,0) and an identity world transform matrix. They do not reflect the current clip rectangle set on the device context or the extent of the context's current target image.
-Gets the bounds of an image with the world transform of the context applied.
-The image whose bounds will be calculated.
When this method returns, contains a reference to the bounds of the image in device independent pixels (DIPs).
The image bounds reflect the current DPI, unit mode, and world transform of the context. To get bounds which don't include the world transform, use
The returned bounds reflect which pixels would be impacted by calling DrawImage with the same image and a target offset of (0,0). They do not reflect the current clip rectangle set on the device context or the extent of the context's current target image.
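The relationship between the local bounds and the world bounds can be sketched as transforming the four corners of the local-space rect by the world transform and taking the axis-aligned bounding box. The `Rect` and `Matrix3x2` types below are illustrative stand-ins, not Direct2D types:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative stand-ins for D2D_RECT_F and D2D1_MATRIX_3X2_F.
struct Rect { float left, top, right, bottom; };
struct Matrix3x2 { float m11, m12, m21, m22, dx, dy; };

// Transform all four corners of r by m, then take the AABB of the
// result. This is conceptually what world bounds add on top of
// local bounds.
Rect TransformBounds(const Rect& r, const Matrix3x2& m)
{
    float xs[4], ys[4];
    const float cx[4] = { r.left, r.right, r.left,   r.right  };
    const float cy[4] = { r.top,  r.top,   r.bottom, r.bottom };
    for (int i = 0; i < 4; ++i) {
        xs[i] = cx[i] * m.m11 + cy[i] * m.m21 + m.dx;
        ys[i] = cx[i] * m.m12 + cy[i] * m.m22 + m.dy;
    }
    return { *std::min_element(xs, xs + 4), *std::min_element(ys, ys + 4),
             *std::max_element(xs, xs + 4), *std::max_element(ys, ys + 4) };
}
```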
-Gets the world-space bounds in DIPs of the glyph run using the device context DPI.
-The origin of the baseline for the glyph run.
The glyph run to render.
The DirectWrite measuring mode that indicates how glyph metrics are used to measure text when it is formatted.
The bounds of the glyph run in DIPs and in world space.
The image bounds reflect the current DPI, unit mode, and world transform of the context.
-Gets the device associated with a device context.
-When this method returns, contains the address of a reference to a Direct2D device associated with this device context.
The application can retrieve the device even if it is created from an earlier render target code-path. The application must use an
The bitmap or command list to which the Direct2D device context will now render.
-The target can be changed at any time, including while the context is drawing.
The target can be either a bitmap created with the
You cannot use SetTarget to render to a bitmap/command list from multiple device contexts simultaneously. An image is considered "being rendered to" if it has ever been set on a device context within a BeginDraw/EndDraw timespan. If an attempt is made to render to an image through multiple device contexts, all subsequent device contexts after the first will enter an error state.
Callers wishing to attach an image to a second device context should first call EndDraw on the first device context. -
Here is an example of the correct calling order.
pDC1->BeginDraw();
pDC1->SetTarget(pImage);
// ...
pDC1->EndDraw();

pDC2->BeginDraw();
pDC2->SetTarget(pImage);
// ...
pDC2->EndDraw();
Here is an example of the incorrect calling order.
pDC1->BeginDraw();
pDC2->BeginDraw();
pDC1->SetTarget(pImage);
// ...
pDC1->SetTarget(NULL);
pDC2->SetTarget(pImage); // This call is invalid, even though pImage is no longer set on pDC1.
// ...
pDC1->EndDraw(); // This EndDraw SUCCEEDs.
pDC2->EndDraw(); // This EndDraw FAILs.

Note: Changing the target does not change the bitmap that an
This API makes it easy for an application to use a bitmap as a source (like in DrawBitmap) and as a destination at the same time. Attempting to use a bitmap as a source on the same device context to which it is bound as a target will put the device context into the
It is acceptable to have a bitmap bound as a target bitmap on multiple render targets at once. Applications that do this must properly synchronize rendering with Flush or EndDraw.
You can change the target at any time, including while the context is drawing.
You can set the target to
If the device context has an outstanding
If the bitmap and the device context are not in the same resource domain, the context will enter an error state. The target will not be changed.
Gets the target currently associated with the device context.
-When this method returns, contains the address of a reference to the target currently associated with the device context.
If a target is not associated with the device context, target will contain
If the currently selected target is a bitmap rather than a command list, the application can gain access to the initial bitmaps created by using one of the following methods:
It is not possible for an application to destroy these bitmaps. All of these bitmaps are bindable as bitmap targets. However not all of these bitmaps can be used as bitmap sources for
CreateDxgiSurfaceRenderTarget will create a bitmap that is usable as a bitmap source if the DXGI surface is bindable as a shader resource view.
CreateCompatibleRenderTarget will always create bitmaps that are usable as a bitmap source.
Direct2D will only lock bitmaps that are not currently locked.
Calling QueryInterface for
Although the target can be a command list, it cannot be any other type of image. It cannot be the output image of an effect.
-Sets the rendering controls for the given device context.
-The rendering controls to be applied.
The rendering controls allow the application to tune the precision, performance, and resource usage of rendering operations.
-Gets the rendering controls that have been applied to the context.
-When this method returns, contains a reference to the rendering controls for this context.
Changes the primitive blend mode that is used for all rendering operations in the device context.
-The primitive blend to use.
The primitive blend will apply to all of the primitive drawn on the context, unless this is overridden with the compositeMode parameter on the DrawImage API.
The primitive blend applies to the interior of any primitives drawn on the context. In the case of DrawImage, this will be implied by the image rectangle, offset and world transform.
If the primitive blend is anything other than
Returns the currently set primitive blend used by the device context.
-The current primitive blend. The default value is
Sets what units will be used to interpret values passed into the device context.
-An enumeration defining how passed-in units will be interpreted by the device context.
This method will affect all properties and parameters affected by SetDpi and GetDpi. This affects all coordinates, lengths, and other properties that are not explicitly defined as being in another unit. For example:
Gets the mode that is being used to interpret values by the device context.
-The unit mode.
Draws a series of glyphs to the device context.
-Origin of first glyph in the series.
The glyphs to render.
Supplementary glyph series information.
The brush that defines the text color.
The measuring mode of the glyph series, used to determine the advances and offsets. The default value is
The glyphRunDescription is ignored when rendering, but can be useful for printing and serialization of rendering commands, such as to an XPS or SVG file. This extends
A command list cannot reference effects which are part of effect graphs that consume the command list.
-Draw a metafile to the device context.
-The metafile to draw.
The offset from the upper left corner of the render target.
Draws a bitmap to the render target.
-The bitmap to draw.
The destination rectangle. The default is the size of the bitmap and the location is the upper left corner of the render target.
The opacity of the bitmap.
The interpolation mode to use.
An optional source rectangle.
An optional perspective transform.
The destinationRectangle parameter defines the rectangle in the target where the bitmap will appear (in device-independent pixels (DIPs)). This is affected by the currently set transform and the perspective transform, if set. If
The sourceRectangle parameter defines the sub-rectangle of the source bitmap (in DIPs). DrawBitmap will clip this rectangle to the size of the source bitmap, thus making it impossible to sample outside of the bitmap. If
If you specify perspectiveTransform it is applied to the rect in addition to the transform set on the render target.
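The source-rectangle clipping described above can be sketched as a simple clamp to the bitmap extent. The `RectF` type and function name here are illustrative, not Direct2D identifiers:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative stand-in for D2D_RECT_F.
struct RectF { float left, top, right, bottom; };

// DrawBitmap clips the source rectangle to the bitmap size, making it
// impossible to sample outside of the bitmap. This sketch shows that
// clamp for an axis-aligned rect.
RectF ClipSourceRect(const RectF& src, float bitmapWidth, float bitmapHeight)
{
    RectF r;
    r.left   = std::max(0.0f, std::min(src.left,   bitmapWidth));
    r.top    = std::max(0.0f, std::min(src.top,    bitmapHeight));
    r.right  = std::max(r.left, std::min(src.right,  bitmapWidth));
    r.bottom = std::max(r.top,  std::min(src.bottom, bitmapHeight));
    return r;
}
```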
-Push a layer onto the clip and layer stack of the device context.
-The parameters that defines the layer.
The layer resource to push on the device context that receives subsequent drawing operations.
Note: If a layer is not specified, Direct2D manages the layer resource automatically.
This indicates that a portion of an effect's input is invalid. This method can be called many times.
You can use this method to propagate invalid rectangles through an effect graph. You can query Direct2D using the GetEffectInvalidRectangles method.
Note: Direct2D does not automatically use these invalid rectangles to reduce the region of an effect that is rendered.
You can also use this method to invalidate caches that have accumulated while rendering effects that have the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
Gets the number of invalid output rectangles that have accumulated on the effect.
-The effect to count the invalid rectangles on.
The returned rectangle count.
Gets the invalid rectangles that have accumulated since the last time the effect was drawn and EndDraw was then called on the device context.
-The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
Note: Direct2D does not automatically use these invalid rectangles to reduce the region of an effect that is rendered.
You can use the InvalidateEffectInputRectangle method to specify invalidated rectangles for Direct2D to propagate through an effect graph.
If multiple invalid rectangles are requested, the rectangles that this method returns may overlap. When this is the case, the rectangle count might be lower than the count returned by GetEffectInvalidRectangleCount.
-Returns the input rectangles that are required to be supplied by the caller to produce the given output rectangle.
-The image whose output is being rendered.
The portion of the output image whose inputs are being inspected.
A list of the inputs whose rectangles are being queried.
The input rectangles returned to the caller.
The number of inputs.
A failure code; this will typically occur only because an effect in the chain returned an error.
The caller should be very careful not to place a reliance on the required input rectangles returned. Small changes for correctness to an effect's behavior can result in different rectangles being returned. In addition, different kinds of optimization applied inside the render can also influence the result.
-Fill using the alpha channel of the supplied opacity mask bitmap. The brush opacity will be modulated by the mask. The render target antialiasing mode must be set to aliased.
-The bitmap that acts as the opacity mask
The brush to use for filling the primitive.
The destination rectangle to output to in the render target
The source rectangle from the opacity mask bitmap.
Enables creation and drawing of geometry realization objects.
-Creates a device-dependent representation of the fill of the geometry that can be subsequently rendered.
-The geometry to realize.
The flattening tolerance to use when converting Beziers to line segments. This parameter shares the same units as the coordinates of the geometry.
When this method returns, contains the address of a reference to a new geometry realization object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
This method is used in conjunction with
If the provided stroke style specifies a stroke transform type other than
Creates a device-dependent representation of the stroke of a geometry that can be subsequently rendered.
-The geometry to realize.
The flattening tolerance to use when converting Beziers to line segments. This parameter shares the same units as the coordinates of the geometry.
The width of the stroke. This parameter shares the same units as the coordinates of the geometry.
The stroke style (optional).
When this method returns, contains the address of a reference to a new geometry realization object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
This method is used in conjunction with
If the provided stroke style specifies a stroke transform type other than
Renders a given geometry realization to the target with the specified brush.
-The geometry realization to be rendered.
The brush to render the realization with.
This method respects all currently set state (transform, DPI, unit mode, target image, clips, layers); however, artifacts such as faceting may appear when rendering the realizations with a large effective scale (either via the transform or the DPI). Callers should create their realizations with an appropriate flattening tolerance using either D2D1_DEFAULT_FLATTENING_TOLERANCE or ComputeFlatteningTolerance to compensate for this.
Additionally, callers should be aware of the safe render bounds when creating geometry realizations. If a geometry extends outside of [-524,287, 524,287] DIPs in either the X- or the Y- direction in its original (pre-transform) coordinate space, then it may be clipped to those bounds when it is realized. This clipping will be visible even if the realization is subsequently transformed to fit within the safe render bounds.
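A minimal sketch of the tolerance computation suggested above, assuming a pure scale/DPI case (the real ComputeFlatteningTolerance helper also accounts for rotation and skew in the transform). The function name and parameters here are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Sketch: D2D1_DEFAULT_FLATTENING_TOLERANCE (0.25) divided by the
// maximum effective scale the realization will be drawn at: transform
// scale, DPI scale relative to 96, and any anticipated zoom. A larger
// effective scale demands a smaller (finer) tolerance.
float FlatteningToleranceForScale(float maxTransformScale,
                                  float dpi,            // e.g. 96, 144
                                  float maxZoomFactor)  // anticipated zoom
{
    const float kDefaultFlatteningTolerance = 0.25f;
    const float effectiveScale =
        maxTransformScale * (dpi / 96.0f) * maxZoomFactor;
    return kDefaultFlatteningTolerance / effectiveScale;
}
```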
-This interface performs all the same functions as the
Creates a new
Creates a new
Creates a new
Creates an image source object from a WIC bitmap source, while populating all pixel memory within the image source. The image is loaded and stored while using a minimal amount of memory.
-The WIC bitmap source to create the image source from.
Options for creating the image source. Default options are used if
Receives the new image source instance.
Receives the new image source instance.
This method creates an image source which can be used to draw the image.
This method supports images that exceed the maximum texture size. Large images are internally stored within a sparse tile cache.
This API supports the same set of pixel formats and alpha modes supported by CreateBitmapFromWicBitmap. If the GPU does not support a given pixel format, this method will return
This method automatically selects an appropriate storage format to minimize GPU memory usage, such as using separate luminance and chrominance textures for JPEG images.
If the loadingOptions argument is
Creates a 3D lookup table for mapping a 3-channel input to a 3-channel output. The table data must be provided in 4-channel format.
-Precision of the input lookup table data.
Number of lookup table elements per dimension (X, Y, Z).
Buffer holding the lookup table data.
Size of the lookup table data buffer.
An array containing two values. The first value is the size in bytes from one row (X dimension) of LUT data to the next. The second value is the size in bytes from one LUT data plane (X and Y dimensions) to the next.
Receives the new lookup table instance.
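Assuming tightly packed 4-channel data, the two stride values described above relate to the extents as follows (the names in this sketch are illustrative, not D2D identifiers, and real buffers may include padding):

```cpp
#include <cassert>
#include <cstdint>

struct LutStrides { uint32_t rowPitch; uint32_t planePitch; };

// Tight-packing sketch for 3D LUT data: rowPitch is the byte distance
// from one X row to the next, planePitch from one X-Y plane to the
// next. bytesPerChannel would be 1 for 8-bit data, 4 for float data.
LutStrides TightLutStrides(const uint32_t extents[3], uint32_t bytesPerChannel)
{
    const uint32_t channels = 4;  // the table data must be 4-channel
    LutStrides s;
    s.rowPitch   = extents[0] * channels * bytesPerChannel;  // one X row
    s.planePitch = s.rowPitch * extents[1];                  // one X-Y plane
    return s;
}
```

The total buffer size for such a layout would be `planePitch * extents[2]`.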
Creates an image source from a set of DXGI surface(s). The YCbCr surface(s) are converted to RGBA automatically during subsequent drawing.
-The DXGI surfaces to create the image source from.
The number of surfaces provided; must be between one and three.
The color space of the input.
Options controlling color space conversions.
Receives the new image source instance.
This method creates an image source which can be used to draw the image. This method supports surfaces that use a limited set of DXGI formats and DXGI color space types. Only the below set of combinations of color space types, surface formats, and surface counts are supported:
Color Space Type | Surface Count(s) | Surface Format(s) |
---|---|---|
1 | Standard D2D-supported pixel formats: | |
1, 2, 3 | When Surface count is 1: When Surface Count is 2:
When Surface Count is 3:
| |
| 1,2,3 | When Surface count is 1: When Surface Count is 2:
When Surface Count is 3:
|
?
The GPU must also have sufficient support for a pixel format to be supported by D2D. To determine whether D2D supports a format, call IsDxgiFormatSupported.
This API converts YCbCr formats to sRGB using the provided color space type and options. RGBA data is assumed to be in the desired space, and D2D does not apply any conversion.
If multiple surfaces are provided, this method infers whether chroma planes are subsampled (by 2x) from the relative sizes of each corresponding source rectangle (or if the source rectangles parameter is
If provided, the source rectangles must be within the bounds of the corresponding surface. The source rectangles may have different origins. In this case, this method shifts the data from each plane to align with one another.
-Creates an image source from a set of DXGI surface(s). The YCbCr surface(s) are converted to RGBA automatically during subsequent drawing.
-The DXGI surfaces to create the image source from.
The number of surfaces provided; must be between one and three.
The color space of the input.
Options controlling color space conversions.
Receives the new image source instance.
This method creates an image source which can be used to draw the image. This method supports surfaces that use a limited set of DXGI formats and DXGI color space types. Only the below set of combinations of color space types, surface formats, and surface counts are supported:
Color Space Type | Surface Count(s) | Surface Format(s) |
---|---|---|
1 | Standard D2D-supported pixel formats: | |
1, 2, 3 | When Surface count is 1: When Surface Count is 2:
When Surface Count is 3:
| |
| 1,2,3 | When Surface count is 1: When Surface Count is 2:
When Surface Count is 3:
|
?
The GPU must also have sufficient support for a pixel format to be supported by D2D. To determine whether D2D supports a format, call IsDxgiFormatSupported.
This API converts YCbCr formats to sRGB using the provided color space type and options. RGBA data is assumed to be in the desired space, and D2D does not apply any conversion.
If multiple surfaces are provided, this method infers whether chroma planes are subsampled (by 2x) from the relative sizes of each corresponding source rectangle (or if the source rectangles parameter is
If provided, the source rectangles must be within the bounds of the corresponding surface. The source rectangles may have different origins. In this case, this method shifts the data from each plane to align with one another.
-Creates an image source from a set of DXGI surface(s). The YCbCr surface(s) are converted to RGBA automatically during subsequent drawing.
-The DXGI surfaces to create the image source from.
The number of surfaces provided; must be between one and three.
The color space of the input.
Options controlling color space conversions.
Receives the new image source instance.
This method creates an image source which can be used to draw the image. This method supports surfaces that use a limited set of DXGI formats and DXGI color space types. Only the below set of combinations of color space types, surface formats, and surface counts are supported:
Color Space Type | Surface Count(s) | Surface Format(s) |
---|---|---|
1 | Standard D2D-supported pixel formats: | |
1, 2, 3 | When Surface count is 1: When Surface Count is 2:
When Surface Count is 3:
| |
| 1,2,3 | When Surface count is 1: When Surface Count is 2:
When Surface Count is 3:
|
?
The GPU must also have sufficient support for a pixel format to be supported by D2D. To determine whether D2D supports a format, call IsDxgiFormatSupported.
This API converts YCbCr formats to sRGB using the provided color space type and options. RGBA data is assumed to be in the desired space, and D2D does not apply any conversion.
If multiple surfaces are provided, this method infers whether chroma planes are subsampled (by 2x) from the relative sizes of each corresponding source rectangle (or if the source rectangles parameter is
If provided, the source rectangles must be within the bounds of the corresponding surface. The source rectangles may have different origins. In this case, this method shifts the data from each plane to align with one another.
-Returns the world bounds of a given gradient mesh.
-The gradient mesh whose world bounds will be calculated.
When this method returns, contains a reference to the bounds of the gradient mesh, in device independent pixels (DIPs).
The world bounds reflect the current DPI, unit mode, and world transform of the context. They indicate which pixels would be impacted by calling DrawGradientMesh with the given gradient mesh. They do not reflect the current clip rectangle set on the device context or the extent of the context's current target.
-Renders the given ink object using the given brush and ink style.
-The ink object to be rendered.
The brush with which to render the ink object.
The ink style to use when rendering the ink object.
Renders a given gradient mesh to the target.
-The gradient mesh to be rendered.
Draws a metafile to the device context using the given source and destination rectangles.
-The metafile to draw.
The rectangle in the target where the metafile will be drawn, relative to the upper left corner (defined in DIPs) of the render target. If
The rectangle of the source metafile that will be drawn, relative to the upper left corner (defined in DIPs) of the metafile. If
Creates an image source which shares resources with an original.
-The original image.
Properties for the source image.
Receives the new image source.
If this method succeeds, it returns
This interface performs all the same functions as the
Creates a new, empty sprite batch. After creating a sprite batch, use
If this method succeeds, it returns
Renders all sprites in the given sprite batch to the device context using the specified drawing options.
-The sprite batch to draw.
The bitmap from which the sprites are to be sourced. Each sprite's source rectangle refers to a portion of this bitmap.
The interpolation mode to use when drawing this sprite batch. This determines how Direct2D interpolates pixels within the drawn sprites if scaling is performed.
The bitmap from which the sprites are to be sourced. Each sprite's source rectangle refers to a portion of this bitmap.
The interpolation mode to use when drawing this sprite batch. This determines how Direct2D interpolates pixels within the drawn sprites if scaling is performed.
The additional drawing options, if any, to be used for this sprite batch.
This interface performs all the same functions as the
Creates an SVG glyph style object.
-On completion points to the created
This method returns an
Represents a set of state and command buffers that are used to render to a target.
The device context can render to a target bitmap or a command list.
-Any resource created from a device context can be shared with any other resource created from a device context when both contexts are created on the same device.
-Draws a text layout object. If the layout is not subsequently changed, this can be more efficient than DrawText when drawing the same layout repeatedly.
-The point, described in device-independent pixels, at which the upper-left corner of the text described by textLayout is drawn.
The formatted text to draw. Any drawing effects that do not inherit from
The brush used to paint the text.
The values for context-fill, context-stroke, and context-value that are used when rendering SVG glyphs.
The index used to select a color palette within a color font.
A value that indicates whether the text should be snapped to pixel boundaries and whether the text should be clipped to the layout rectangle. The default value is
Draws a color bitmap glyph run using one of the bitmap formats.
-Specifies the format of the glyph image. Supported formats are
Only one format can be specified at a time, combinations of flags are not valid input.
The origin of the baseline for the glyph run.
The glyphs to render.
Indicates the measuring method.
Specifies the pixel snapping policy when rendering color bitmap glyphs.
Draws a color glyph run that has the format of
The origin of the baseline for the glyph run.
The glyphs to render.
The brush used to paint the specified glyphs.
Values for context-fill, context-stroke, and context-value that are used when rendering SVG glyphs.
The index used to select a color palette within a color font. Note that this is not the same as the paletteIndex in the
Indicates the measuring method used for text layout.
Retrieves an image of the color bitmap glyph from the color glyph cache. If the cache does not already contain the requested resource, it will be created. This method may be used to extend the lifetime of a glyph image even after it is evicted from the color glyph cache.
-The format for the glyph image. If there is no image data in the requested format for the requested glyph, this method will return an error.
The origin for the glyph.
Reference to a font face which contains font face type, appropriate file references, face identification data and various font data such as metrics, names and glyph outlines.
The specified font size affects the choice of which bitmap to use from the font. It also affects the output glyphTransform, causing it to properly scale the glyph.
Index of the glyph.
If true, specifies that glyphs are rotated 90 degrees to the left and vertical metrics are used. Vertical writing is achieved by specifying isSideways as true and rotating the entire run 90 degrees to the right via a rotate transform.
The transform to apply to the image. This input transform affects the choice of which bitmap to use from the font. It is also factored into the output glyphTransform.
Dots per inch along the x-axis.
Dots per inch along the y-axis.
Output transform, which transforms from the glyph's space to the same output space as the worldTransform. This includes the input glyphOrigin, the glyph's offset from the glyphOrigin, and any other required transformations.
On completion contains the retrieved glyph image.
This method returns an
Retrieves an image of the SVG glyph from the color glyph cache. If the cache does not already contain the requested resource, it will be created. This method may be used to extend the lifetime of a glyph image even after it is evicted from the color glyph cache.
-Origin of the glyph.
Reference to a font face which contains font face type, appropriate file references, face identification data and various font data such as metrics, names and glyph outlines.
The specified font size affects the output glyphTransform, causing it to properly scale the glyph.
Index of the glyph to retrieve.
If true, specifies that glyphs are rotated 90 degrees to the left and vertical metrics are used. Vertical writing is achieved by specifying isSideways as true and rotating the entire run 90 degrees to the right via a rotate transform.
The transform to apply to the image.
Describes how the area is painted.
The values for context-fill, context-stroke, and context-value that are used when rendering SVG glyphs.
The index used to select a color palette within a color font. Note that this is not the same as the paletteIndex in the
Output transform, which transforms from the glyph's space to the same output space as the worldTransform. This includes the input glyphOrigin, the glyph's offset from the glyphOrigin, and any other required transformations.
On completion, contains the retrieved glyph image.
This method returns an
Represents a set of state and command buffers that are used to render to a target.
The device context can render to a target bitmap or a command list.
-Any resource created from a device context can be shared with any other resource created from a device context when both contexts are created on the same device.
-Creates a color context from a DXGI color space type. It is only valid to use this with the Color Management Effect in 'Best' mode.
-The color space to create the color context from.
The created color context.
This method returns an
Issues drawing commands to a GDI device context.
-Binds the render target to the device context to which it issues drawing commands.
-The device context to which the render target issues drawing commands.
The dimensions of the handle to a device context (
If this method succeeds, it returns
Before you can render with the DC render target, you must use its BindDC method to associate it with a GDI DC. You do this each time you use a different DC, or the size of the area you want to draw to changes.
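A minimal sketch of the BindDC flow described above, assuming `dcRenderTarget` is an ID2D1DCRenderTarget created earlier from the factory; the window procedure and the drawing commands themselves are placeholders.

```cpp
#include <d2d1.h>
#include <windows.h>

void PaintWithD2D(ID2D1DCRenderTarget* dcRenderTarget, HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    RECT rc;
    GetClientRect(hwnd, &rc);

    // Associate the render target with the GDI DC before drawing.
    // Repeat this whenever the DC or the size of the drawing area changes.
    dcRenderTarget->BindDC(hdc, &rc);

    dcRenderTarget->BeginDraw();
    dcRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));
    // ... issue Direct2D drawing commands here ...
    dcRenderTarget->EndDraw();

    EndPaint(hwnd, &ps);
}
```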
-This interface is used to describe a GPU rendering pass on a vertex or pixel shader. It is passed to
Sets the constant buffer for this transform's pixel shader.
-The data applied to the constant buffer.
The number of bytes of data in the constant buffer.
If this method succeeds, it returns
Sets the resource texture corresponding to the given shader texture index.
-The index of the texture to be bound to the pixel shader.
The created resource texture.
If the method succeeds, it returns
Sets the constant buffer for this transform's vertex shader.
-The data applied to the constant buffer.
The number of bytes of data in the constant buffer.
If the method succeeds, it returns
Set the shader instructions for this transform.
-The resource id for the shader.
Additional information provided to the renderer to indicate the operations the pixel shader does.
If the method succeeds, it returns
If this call fails, the corresponding
Specifying pixelOptions other than
Sets a vertex buffer, a corresponding vertex shader, and options to control how the vertices are to be handled by the Direct2D context.
-The vertex buffer. If this is cleared, the default vertex shader and mapping to the transform rectangles will be used.
Options that influence how the renderer will interact with the vertex shader.
How the vertices will be blended with the output texture.
The set of vertices to use from the buffer.
The
If the method succeeds, it returns
The vertex shaders associated with the vertex buffer through the vertex shader
If you pass the vertex option
blendDesc = { D2D1_BLEND_ONE, D2D1_BLEND_ZERO, D2D1_BLEND_OPERATION_ADD, D2D1_BLEND_ONE, D2D1_BLEND_ZERO, D2D1_BLEND_OPERATION_ADD, { 1.0f, 1.0f, 1.0f, 1.0f } };
If this call fails, the corresponding
If blendDescription is
Represents the drawing state of a render target: the antialiasing mode, transform, tags, and text-rendering options.
-Retrieves or sets the antialiasing mode, transform, and tags portion of the drawing state.
-Retrieves or sets the text-rendering configuration of the drawing state.
-Retrieves the antialiasing mode, transform, and tags portion of the drawing state.
-When this method returns, contains the antialiasing mode, transform, and tags portion of the drawing state. You must allocate storage for this parameter.
Specifies the antialiasing mode, transform, and tags portion of the drawing state.
-The antialiasing mode, transform, and tags portion of the drawing state.
Specifies the text-rendering configuration of the drawing state.
-The text-rendering configuration of the drawing state, or
Retrieves the text-rendering configuration of the drawing state.
-When this method returns, contains the address of a reference to an
Implementation of a drawing state block that adds the functionality of primitive blend in addition to the already existing antialias mode, transform, tags, and text rendering mode.
Gets or sets the antialiasing mode, transform, tags, primitive blend, and unit mode portion of the drawing state.
-Gets the antialiasing mode, transform, tags, primitive blend, and unit mode portion of the drawing state.
-When this method returns, contains the antialiasing mode, transform, tags, primitive blend, and unit mode portion of the drawing state. You must allocate storage for this parameter.
Sets the
A specialized implementation of the Shantzis calculations for a transform implemented on the GPU. These calculations are described in the paper A model for efficient and flexible image computing.
The information required to specify a "Pass" in the rendering algorithm on a Pixel Shader is passed to the implementation through the SetDrawInfo method.
-A specialized implementation of the Shantzis calculations for a transform implemented on the GPU. These calculations are described in the paper A model for efficient and flexible image computing.
The information required to specify a "Pass" in the rendering algorithm on a Pixel Shader is passed to the implementation through the SetDrawInfo method.
-Provides the GPU render info interface to the transform implementation.
-The interface supplied back to the calling method to allow it to specify the GPU based transform pass.
Any
The transform can maintain a reference to this interface for its lifetime. If any properties change on the transform, it can apply these changes to the corresponding drawInfo interface.
This is also used to determine that the corresponding nodes in the graph are dirty.
-Represents a basic image-processing construct in Direct2D.
-An effect takes zero or more input images, and has an output image. The images that are input into and output from an effect are lazily evaluated. This definition is sufficient to allow an arbitrary graph of effects to be created from the application by feeding output images into the input image of the next effect in the chain.
-Gets or sets the number of inputs to the effect.
-Gets the output image from the effect.
-The output image can be set as an input to another effect, or can be directly passed into the
It is also possible to use QueryInterface to retrieve the same output image.
-Sets the given input image by index.
-The index of the image to set.
The input image to set.
Whether to invalidate the graph at the location of the effect input
If the input index is out of range, the input image is ignored.
-Allows the application to change the number of inputs to an effect.
-The number of inputs to the effect.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_INVALIDARG | One or more arguments are invalid. |
E_OUTOFMEMORY | Failed to allocate necessary memory. |
Most effects do not support a variable number of inputs. Use
If the input count is less than the minimum or more than the maximum supported inputs, the call will fail.
If the input count is unchanged, the call will succeed with
Any inputs currently selected on the effect will be unaltered by this call unless the number of inputs is made smaller. If the number of inputs is made smaller, inputs beyond the selected range will be released.
If the method fails, the existing input and input count will remain unchanged.
-Represents a basic image-processing construct in Direct2D.
-An effect takes zero or more input images, and has an output image. The images that are input into and output from an effect are lazily evaluated. This definition is sufficient to allow an arbitrary graph of effects to be created from the application by feeding output images into the input image of the next effect in the chain.
-Gets the number of inputs to the effect.
-This method returns the number of inputs to the effect.
Gets the output image from the effect.
-When this method returns, contains the address of a reference to the output image for the effect.
The output image can be set as an input to another effect, or can be directly passed into the
It is also possible to use QueryInterface to retrieve the same output image.
-Provides factory methods and other state management for effect and transform authors.
-This interface is passed to an effect implementation through the
Each call to ID2D1Effect::Initialize will be provided a different
Gets the unit mapping that an effect will use for properties that could be in either dots per inch (dpi) or pixels.
-The dpi on the x-axis.
The dpi on the y-axis.
If the
Creates a Direct2D effect for the specified class ID. This is the same as
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid value was passed to the method. |
D3DERR_OUTOFVIDEOMEMORY | Direct3D does not have enough display memory to perform the operation. |
 | The specified effect is not registered by the system. |
The created effect does not reference count the DLL from which the effect was created. If the caller unregisters an effect while this effect is loaded, the resulting behavior is unpredictable.
-This indicates the maximum feature level from the provided list which is supported by the device. If none of the provided levels are supported, then this API fails with
The feature levels provided by the application.
The count of feature levels provided by the application.
The maximum feature level from the featureLevels list which is supported by the D2D device.
Wraps an effect graph in a single transform node that can then be inserted into a transform graph. This allows an effect to aggregate other effects. This is typically done to allow the effect properties to be re-expressed with a different contract, or to allow different components to integrate each other's effects.
-The effect to be wrapped in a transform node.
The returned transform node that encapsulates the effect graph.
This creates a blend transform that can be inserted into a transform graph.
-The number of inputs to the blend transform.
Describes the blend transform that is to be created.
The returned blend transform.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
Creates a transform that extends its input infinitely in every direction based on the passed in extend mode.
-The extend mode in the X-axis direction.
The extend mode in the Y-axis direction.
The returned transform.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
Creates and returns an offset transform.
-The offset amount.
When this method returns, contains the address of a reference to an offset transform object.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
An offset transform is used to offset an input bitmap without having to insert a rendering pass. An offset transform is automatically inserted by an Affine transform if the transform evaluates to a pixel-aligned transform.
-Creates and returns a bounds adjustment transform.
-The initial output rectangle for the bounds adjustment transform.
The returned bounds adjustment transform.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
A support transform can be used for two different reasons.
Loads the given shader by its unique ID. Loading the same shader multiple times is ignored. When the shader is loaded it is also handed to the driver to JIT, if it hasn't been already.
-The unique id that identifies the shader.
The buffer that contains the shader to register.
The size of the shader buffer in bytes.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
The shader you specify must be compiled shader byte code, not raw HLSL.
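As a sketch, an effect author might load its compiled pixel shader from inside the effect's initialization like this. `GUID_MyPixelShader` is a hypothetical GUID the effect author defines to identify the shader, and the byte code is assumed to have been produced offline (e.g. by fxc) or at run time by D3DCompile.

```cpp
#include <d2d1effectauthor.h>

// Hypothetical shader id defined by the effect author.
static const GUID GUID_MyPixelShader =
    { 0x12345678, 0x1234, 0x1234,
      { 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0 } };

HRESULT LoadShader(ID2D1EffectContext* effectContext,
                   const BYTE* byteCode, UINT32 byteCodeSize)
{
    // Loading the same GUID a second time is ignored, so it is safe to
    // call this from every effect instance.
    return effectContext->LoadPixelShader(GUID_MyPixelShader,
                                          byteCode, byteCodeSize);
}
```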
-Loads the given shader by its unique ID. Loading the same shader multiple times is ignored. When the shader is loaded it is also handed to the driver to JIT, if it hasn't been already.
-The unique id that identifies the shader.
The buffer that contains the shader to register.
The size of the shader buffer in bytes.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
The shader you specify must be compiled shader byte code, not raw HLSL.
-Loads the given shader by its unique ID. Loading the same shader multiple times is ignored. When the shader is loaded it is also handed to the driver to JIT, if it hasn't been already.
-The unique id that identifies the shader.
The buffer that contains the shader to register.
The size of the shader buffer in bytes.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
The shader you specify must be compiled shader byte code, not raw HLSL.
-This tests to see if the given shader is loaded.
-The unique id that identifies the shader.
Whether the shader is loaded.
Creates or finds the given resource texture, depending on whether a resource id is specified. It also optionally initializes the texture with the specified data.
-An optional reference to the unique id that identifies the lookup table.
The properties used to create the resource texture.
The optional data to be loaded into the resource texture.
An optional reference to the stride to advance through the resource texture, according to dimension.
The size, in bytes, of the data.
The returned texture that can be used as a resource in a Direct2D effect.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
Finds the given resource texture if it has already been created with
Creates a vertex buffer or finds a standard vertex buffer and optionally initializes it with vertices. The returned buffer can be specified in the render info to specify a vertex shader and/or to pass custom vertices to the standard vertex shader used by Direct2D.
-The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
This finds the given vertex buffer if it has already been created with
Creates a color context from a color space.
If the color space is Custom, the context is initialized from the profile and profileSize parameters.
If the color space is not Custom, the context is initialized with the profile bytes associated with the color space. The profile and profileSize parameters are ignored.
-The space of color context to create.
A buffer containing the ICC profile bytes used to initialize the color context when space is
The size in bytes of Profile.
When this method returns, contains the address of a reference to a new color context object.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid value was passed to the method. |
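The two cases described above can be sketched as follows: creating a color context for a built-in space, and a custom one initialized from ICC profile bytes. `effectContext` is assumed to be a valid ID2D1EffectContext supplied to the effect's initialization, and the profile buffer is assumed to hold a valid ICC profile.

```cpp
#include <d2d1effectauthor.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT CreateContexts(ID2D1EffectContext* effectContext,
                       const BYTE* iccProfile, UINT32 iccProfileSize)
{
    ComPtr<ID2D1ColorContext> srgb, custom;

    // Built-in space: the profile and profileSize parameters are ignored.
    HRESULT hr = effectContext->CreateColorContext(
        D2D1_COLOR_SPACE_SRGB, nullptr, 0, &srgb);
    if (FAILED(hr)) return hr;

    // Custom space: initialized from the supplied ICC profile bytes.
    return effectContext->CreateColorContext(
        D2D1_COLOR_SPACE_CUSTOM, iccProfile, iccProfileSize, &custom);
}
```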
Creates a color context by loading it from the specified filename. The profile bytes are the contents of the file specified by filename.
-The path to the file containing the profile bytes to initialize the color context with.
When this method returns, contains the address of a reference to a new color context.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid value was passed to the method. |
Creates a color context from an
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid value was passed to the method. |
The new color context can be used in
This indicates whether an optional capability is supported by the D3D device.
-The feature to query support for.
A structure indicating information about how or if the feature is supported.
The size of the featureSupportData parameter.
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
Indicates whether the buffer precision is supported by the underlying Direct2D device.
-Returns TRUE if the buffer precision is supported. Returns
Describes features of an effect.
-The effect whose input connection is being specified.
The input index of the effect that is being considered.
The amount of data that would be available on the input. This can be used to query this information when the data is not yet available.
Contains the center point, x-radius, and y-radius of an ellipse.
-The center point of the ellipse.
The X-radius of the ellipse.
The Y-radius of the ellipse.
Represents an ellipse.
-Gets the
Gets the
Creates Direct2D resources.
-The
A factory defines a set of CreateResource methods that can produce the following drawing resources:
To create an
Forces the factory to refresh any system defaults that it might have changed since factory creation.
-If this method succeeds, it returns
You should call this method before calling the GetDesktopDpi method, to ensure that the system DPI is current.
-Retrieves the current desktop dots per inch (DPI). To refresh this value, call ReloadSystemMetrics.
-Use this method to obtain the system DPI when setting physical pixel values, such as when you specify the size of a window.
- Creates an
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Geometry groups are a convenient way to group several geometries simultaneously so all figures of several distinct geometries are concatenated into one. To create a
Creates an
If this method succeeds, it returns
Geometry groups are a convenient way to group several geometries simultaneously so all figures of several distinct geometries are concatenated into one. To create a
Creates an
If this method succeeds, it returns
Geometry groups are a convenient way to group several geometries simultaneously so all figures of several distinct geometries are concatenated into one. To create a
Transforms the specified geometry and stores the result as an
If this method succeeds, it returns
Like other resources, a transformed geometry inherits the resource space and threading policy of the factory that created it. This object is immutable.
When stroking a transformed geometry with the DrawGeometry method, the stroke width is not affected by the transform applied to the geometry. The stroke width is only affected by the world transform.
-Creates an empty
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Creates a render target that renders to a Microsoft Windows Imaging Component (WIC) bitmap.
-The bitmap that receives the rendering output of the render target.
The rendering mode, pixel format, remoting options, DPI information, and the minimum DirectX support required for hardware rendering. For information about supported pixel formats, see Supported Pixel Formats and Alpha Modes.
When this method returns, contains the address of the reference to the
If this method succeeds, it returns
You must use
Your application should create render targets once and hold onto them for the life of the application or until the
Note: This method isn't supported on Windows Phone and will fail when called on a device with error code 0x8899000b ("There is no hardware rendering device available for this operation"). Because the Windows Phone Emulator supports WARP rendering, this method will fail when called on the emulator with a different error code, 0x88982f80 (WINCODEC_ERR_UNSUPPORTEDPIXELFORMAT).
-Creates an
If this method succeeds, it returns
When you create a render target and hardware acceleration is available, you allocate resources on the computer's GPU. By creating a render target once and retaining it as long as possible, you gain performance benefits. Your application should create render targets once and hold onto them for the life of the application or until the
Creates a render target that draws to a DirectX Graphics Infrastructure (DXGI) surface.
-The
The rendering mode, pixel format, remoting options, DPI information, and the minimum DirectX support required for hardware rendering. For information about supported pixel formats, see Supported Pixel Formats and Alpha Modes.
When this method returns, contains the address of the reference to the
If this method succeeds, it returns
To write to a Direct3D surface, you obtain an
A DXGI surface render target is a type of
The DXGI surface render target and the DXGI surface must use the same DXGI format. If you specify the DXGI_FORMAT_UNKNOWN format when you create the render target, it will automatically use the surface's format.
The DXGI surface render target does not perform DXGI surface synchronization.
For more information about creating and using DXGI surface render targets, see the Direct2D and Direct3D Interoperability Overview.
To work with Direct2D, the Direct3D device that provides the
When you create a render target and hardware acceleration is available, you allocate resources on the computer's GPU. By creating a render target once and retaining it as long as possible, you gain performance benefits. Your application should create render targets once and hold onto them for the life of the application or until the render target's EndDraw method returns the
Creates a render target that draws to a Windows Graphics Device Interface (GDI) device context.
-The rendering mode, pixel format, remoting options, DPI information, and the minimum DirectX support required for hardware rendering. To enable the device context (DC) render target to work with GDI, set the DXGI format to
When this method returns, dcRenderTarget contains the address of the reference to the
If this method succeeds, it returns
Before you can render with a DC render target, you must use the render target's BindDC method to associate it with a GDI DC. Do this for each different DC and whenever there is a change in the size of the area you want to draw to.
To enable the DC render target to work with GDI, set the render target's DXGI format to
Your application should create render targets once and hold on to them for the life of the application or until the render target's EndDraw method returns the
Creates Direct2D resources.
- The
Creates a
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid parameter was passed to the returning function. |
D3DERR_OUTOFVIDEOMEMORY | Direct3D does not have enough display memory to perform the operation. |
The Direct2D device defines a resource domain in which a set of Direct2D objects and Direct2D device contexts can be used together. Each call to CreateDevice returns a unique
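The factory-to-device-context chain described above can be sketched as follows, assuming `dxgiDevice` is an IDXGIDevice obtained from the Direct3D device the application created, and `factory` is an existing ID2D1Factory1.

```cpp
#include <d2d1_1.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT CreateD2DDevice(ID2D1Factory1* factory, IDXGIDevice* dxgiDevice,
                        ComPtr<ID2D1DeviceContext>& deviceContext)
{
    ComPtr<ID2D1Device> device;

    // Each CreateDevice call returns a unique Direct2D device that
    // defines the resource domain for contexts created from it.
    HRESULT hr = factory->CreateDevice(dxgiDevice, &device);
    if (FAILED(hr)) return hr;

    // Resources created from contexts on the same device can be shared.
    return device->CreateDeviceContext(
        D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &deviceContext);
}
```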
Creates a
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
E_INVALIDARG | An invalid value was passed to the method. |
It is valid to specify a dash array only if
Creates an
The method returns an
HRESULT | Description |
---|---|
S_OK | No error occurred. |
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
Creates a new drawing state block. This can be used in subsequent SaveDrawingState and RestoreDrawingState operations on the render target.
-The drawing state description structure.
The address of the newly created drawing state block.
The method returns an HRESULT error code. Possible values include, but are not limited to, those in the following table.

HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
Creates a new
The method returns an HRESULT error code. Possible values include, but are not limited to, those in the following table.

HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
Registers an effect within the factory instance with the property XML specified as a stream.
-The identifier of the effect to be registered.
A list of the effect properties, types, and metadata.
An array of properties and methods.
This binds a property by name to a particular method implemented by the effect author to handle the property. The name must be found in the corresponding propertyXml.
The number of bindings in the binding array.
The static factory that is used to create the corresponding effect.
The method returns an HRESULT error code. Possible values include, but are not limited to, those in the following table.

HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
Direct2D effects must define their properties at registration time via registration XML. An effect declares several required system properties, and can also declare custom properties. See Custom effects for more information about formatting the propertyXml parameter.
RegisterEffect is both atomic and reference counted. To unregister an effect, call UnregisterEffect with the classId of the effect.
Important: RegisterEffect does not hold a reference to the DLL or executable file in which the effect is contained. The application must independently make sure that the lifetime of the DLL or executable file completely contains all instances of each registered and created effect. Aside from the built-in effects that are globally registered, this API registers effects only for this factory, derived device, and device context interfaces.
-Registers an effect within the factory instance with the property XML specified as a string.
-The identifier of the effect to be registered.
A list of the effect properties, types, and metadata.
An array of properties and methods.
This binds a property by name to a particular method implemented by the effect author to handle the property. The name must be found in the corresponding propertyXml.
The number of bindings in the binding array.
The static factory that is used to create the corresponding effect.
The method returns an HRESULT error code. Possible values include, but are not limited to, those in the following table.

HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
Direct2D effects must define their properties at registration time via registration XML. An effect declares several required system properties, and can also declare custom properties. See Custom effects for more information about formatting the propertyXml parameter.
RegisterEffect is both atomic and reference counted. To unregister an effect, call UnregisterEffect with the classId of the effect.
Important: RegisterEffect does not hold a reference to the DLL or executable file in which the effect is contained. The application must independently make sure that the lifetime of the DLL or executable file completely contains all instances of each registered and created effect. Aside from the built-in effects that are globally registered, this API registers effects only for this factory and derived device and device context interfaces.
-Unregisters an effect within the factory instance that corresponds to the classId provided.
-The identifier of the effect to be unregistered.
In order for the effect to be fully unloaded, you must call UnregisterEffect the same number of times that you have registered the effect.
The UnregisterEffect method unregisters only those effects that are registered on the same factory. It cannot be used to unregister a built-in effect.
-Returns the class IDs of the currently registered effects and global effects on this factory.
-When this method returns, contains an array of effects.
The capacity of the effects array.
When this method returns, contains the number of effects copied into effects.
When this method returns, contains the number of effects currently registered in the system.
The method returns an HRESULT error code. Possible values include, but are not limited to, those in the following table.

HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
HRESULT_FROM_WIN32( | effectsRegistered is larger than effectCount.
The set of class IDs will be atomically returned by the API. The set will not be interrupted by other threads registering or unregistering effects.
If effectsRegistered is larger than effectCount, the supplied array will still be filled to capacity with the current set of registered effects. This method returns the CLSIDs for all global effects and all effects registered to this factory.
-Retrieves the properties of an effect.
-The ID of the effect to retrieve properties from.
When this method returns, contains the address of a reference to the property interface that can be used to query the metadata of the effect.
The returned effect properties will have all the mutable properties for the effect set to a default of
This method cannot be used to return the properties for any effect not visible to
Creates Direct2D resources.
This interface also enables the creation of
Creates an
The method returns an HRESULT error code. Possible values include, but are not limited to, those in the following table.

HRESULT | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
D3DERR_OUTOFVIDEOMEMORY | Direct3D does not have enough display memory to perform the operation.
The Direct2D device defines a resource domain in which a set of Direct2D objects and Direct2D device contexts can be used together. Each call to CreateDevice returns a unique
Provides access to a device context that can accept GDI drawing commands.
-You don't create an
Not all render targets support the
Note that the QueryInterface method always succeeds; if the render target doesn't support the
To test whether a given render target supports the
Retrieves the device context associated with this render target.
-A value that specifies whether the device context should be cleared.
When this method returns, contains the device context associated with this render target. You must allocate storage for this parameter.
Calling this method flushes the render target.
This command can be called only after BeginDraw and before EndDraw.
Note: In Windows 7 and earlier, you should not call GetDC between PushAxisAlignedClip/PopAxisAlignedClip commands or between PushLayer/PopLayer. However, this restriction does not apply to Windows 8 and later. ReleaseDC must be called once for each call to GetDC.
-Indicates that drawing with the device context retrieved using the GetDC method is finished.
-If this method succeeds, it returns
ReleaseDC must be called once for each call to GetDC.
-The interpolation mode to be used with the 2D affine transform effect to scale the image. There are six scale modes that range in quality and speed.
-Samples the nearest single point and uses that. This mode uses less processing time, but outputs the lowest quality image.
Uses a four point sample and linear interpolation. This mode uses more processing time than the nearest neighbor mode, but outputs a higher quality image.
Uses a 16 sample cubic kernel for interpolation. This mode uses the most processing time, but outputs a higher quality image.
Uses 4 linear samples within a single pixel for good edge anti-aliasing. This mode is good for scaling down by small amounts on images with few pixels.
Uses anisotropic filtering to sample a pattern according to the transformed shape of the bitmap.
Uses a variable-size, high-quality cubic kernel to pre-downscale the image if downscaling is involved in the transform matrix, then uses the cubic interpolation mode for the final output.
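To illustrate the difference between the nearest-neighbor and linear modes described above, here is a small sketch. This is plain Python over a 2D list of intensities, not Direct2D code; the image data and function names are made up for the example:

```python
def sample_nearest(img, x, y):
    """Nearest neighbor: snap to the closest source pixel."""
    return img[round(y)][round(x)]

def sample_linear(img, x, y):
    """Bilinear: blend the four surrounding pixels by fractional distance."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

img = [[0.0, 1.0],
       [0.0, 1.0]]
print(sample_nearest(img, 0.4, 0.0))  # snaps to pixel (0, 0) -> 0.0
print(sample_linear(img, 0.5, 0.0))   # halfway between 0.0 and 1.0 -> 0.5
```

Nearest neighbor picks a single source value (fast, blocky); the linear mode averages the four neighbors, which is why it costs more but looks smoother.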
Identifiers for properties of the 2D affine transform effect.
-Specifies how the alpha value of a bitmap or render target should be treated.
-The
The alpha value might not be meaningful.
The alpha value has been premultiplied. Each color is first scaled by the alpha value. The alpha value itself is the same in both straight and premultiplied alpha. Typically, no color channel value is greater than the alpha channel value. If a color channel value in a premultiplied format is greater than the alpha channel, the standard source-over blending math results in an additive blend.
The alpha value has not been premultiplied. The alpha channel indicates the transparency of the color.
The alpha value is ignored.
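The premultiplied-alpha description above can be made concrete with a small sketch. This is plain Python with tuples standing in for RGBA colors, not Direct2D code:

```python
def to_premultiplied(r, g, b, a):
    """Straight -> premultiplied: scale each color channel by alpha.
    The alpha value itself is unchanged."""
    return (r * a, g * a, b * a, a)

def source_over(src, dst):
    """Standard source-over blend for premultiplied colors:
    out = src + dst * (1 - src_alpha), applied to every channel."""
    a = src[3]
    return tuple(s + d * (1 - a) for s, d in zip(src, dst))

# A 50%-opaque red drawn over an opaque blue background:
src = to_premultiplied(1.0, 0.0, 0.0, 0.5)   # (0.5, 0.0, 0.0, 0.5)
dst = to_premultiplied(0.0, 0.0, 1.0, 1.0)   # (0.0, 0.0, 1.0, 1.0)
print(source_over(src, dst))                 # (0.5, 0.0, 0.5, 1.0)
```

Note that if a premultiplied color channel were larger than its alpha (which normally cannot happen), the `dst * (1 - src_alpha)` term no longer attenuates enough, and the blend becomes effectively additive, as the description above states.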
Specifies how the edges of nontext primitives are rendered.
-Edges are antialiased using the Direct2D per-primitive method of high-quality antialiasing.
Objects are aliased in most cases. Objects are antialiased only when they are drawn to a render target created by the CreateDxgiSurfaceRenderTarget method and Direct3D multisampling has been enabled on the backing DirectX Graphics Infrastructure (DXGI) surface.
Specifies whether an arc should be greater than 180 degrees.
-An arc's sweep should be 180 degrees or less.
An arc's sweep should be 180 degrees or greater.
Identifiers for the properties of the Arithmetic composite effect.
-Identifiers for properties of the Atlas effect.
-Specifies the algorithm that is used when images are scaled or rotated.
Note: Starting in Windows 8, more interpolation modes are available. To stretch an image, each pixel in the original image must be mapped to a group of pixels in the larger image. To shrink an image, groups of pixels in the original image must be mapped to single pixels in the smaller image. The effectiveness of the algorithms that perform these mappings determines the quality of a scaled image. Algorithms that produce higher-quality scaled images tend to require more processing time.
Specifies how a bitmap can be used.
-The bitmap is created with default properties.
The bitmap can be used as a device context target.
The bitmap cannot be used as an input.
The bitmap can be read from the CPU.
The bitmap works with
Specifies the alpha mode of the output of the Bitmap source effect.
-The interpolation mode used to scale the image in the Bitmap source effect. If the mode disables the mipmap, then BitmapSource will cache the image at the resolution determined by the Scale and EnableDPICorrection properties.
-Specifies whether a flip and/or rotation operation should be performed by the Bitmap source effect.
-Identifiers for properties of the Bitmap source effect.
-Specifies how one of the color sources is to be derived and optionally specifies a preblend operation on the color source.
-This enumeration has the same numeric values as D3D10_BLEND.
-The data source is black (0, 0, 0, 0). There is no preblend operation.
The data source is white (1, 1, 1, 1). There is no preblend operation.
The data source is color data (RGB) from the second input of the blend transform. There is no preblend operation.
The data source is color data (RGB) from second input of the blend transform. The preblend operation inverts the data, generating 1 - RGB.
The data source is alpha data (A) from second input of the blend transform. There is no preblend operation.
The data source is alpha data (A) from the second input of the blend transform. The preblend operation inverts the data, generating 1 - A.
The data source is alpha data (A) from the first input of the blend transform. There is no preblend operation.
The data source is alpha data (A) from the first input of the blend transform. The preblend operation inverts the data, generating 1 - A.
The data source is color data from the first input of the blend transform. There is no preblend operation.
The data source is color data from the first input of the blend transform. The preblend operation inverts the data, generating 1 - RGB.
The data source is alpha data from the second input of the blend transform. The preblend operation clamps the data to 1 or less.
The data source is the blend factor. There is no preblend operation.
The data source is the blend factor. The preblend operation inverts the blend factor, generating 1 - blend_factor.
The blend mode used for the Blend effect.
-Specifies the blend operation on two color sources.
-This enumeration has the same numeric values as D3D10_BLEND_OP.
-Add source 1 and source 2.
Subtract source 1 from source 2.
Subtract source 2 from source 1.
Find the minimum of source 1 and source 2.
Find the maximum of source 1 and source 2.
Identifiers for properties of the Blend effect.
-The edge mode for the Border effect.
-Specifies how the Crop effect handles the crop rectangle falling on fractional pixel coordinates.
-Identifiers for properties of the Border effect.
-Identifiers for the properties of the Brightness effect.
-Represents the bit depth of the imaging pipeline in Direct2D.
-The buffer precision is not specified.
Use 8-bit normalized integer per channel.
Use 8-bit normalized integer standard RGB data per channel.
Use 16-bit normalized integer per channel.
Use 16-bit floats per channel.
Use 32-bit floats per channel.
Describes the shape at the end of a line or segment.
-The following illustration shows the available cap styles for lines or segments. The red portion of the line shows the extra area added by the line cap setting.
-A cap that does not extend past the last point of the line. Comparable to cap used for objects other than lines.
Half of a square that has a length equal to the line thickness.
A semicircle that has a diameter equal to the line thickness.
An isosceles right triangle whose hypotenuse is equal in length to the thickness of the line.
Describes flags that influence how the renderer interacts with a custom vertex shader.
-There were no changes.
The properties of the effect changed.
The context state changed.
The effect's transform graph has changed. This happens only when an effect supports a variable input count.
Allows a caller to control the channel depth of a stage in the rendering pipeline.
-The channel depth is the default. It is inherited from the inputs.
The channel depth is 1.
The channel depth is 4.
Specifies the color channel the Displacement map effect extracts the intensity from and uses it to spatially displace the image in the X or Y direction.
-Identifiers for properties of the Chroma-key effect.
-Specifies the pixel snapping policy when rendering color bitmap glyphs.
-Color bitmap glyph positions are snapped to the nearest pixel if the bitmap resolution matches that of the device context.
Color bitmap glyph positions are not snapped.
Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
The render target uses hardware rendering only.
Defines how to interpolate between colors.
-Colors are interpolated with straight alpha.
Colors are interpolated with premultiplied alpha.
Indicates how the Color management effect should interpret alpha data that is contained in the input image.
-Identifiers for the properties of the Color management effect.
-The quality level of the transform for the Color management effect.
-Specifies which ICC rendering intent the Color management effect should use.
-The alpha mode of the output of the Color matrix effect.
-Identifiers for the properties of the Color matrix effect.
-Defines options that should be applied to the color space.
-The color space is otherwise described, such as with a color profile.
The color space is sRGB.
The color space is scRGB.
Specifies the different methods by which two geometries can be combined.
-The following illustration shows the different geometry combine modes. -
-The two regions are combined by taking the union of both. Given two geometries, A and B, the resulting geometry is geometry A + geometry B.
The two regions are combined by taking their intersection. The new area consists of the overlapping region between the two geometries.
The two regions are combined by taking the area that exists in the first region but not the second and the area that exists in the second region but not the first. Given two geometries, A and B, the new region consists of (A-B) + (B-A).
The second region is excluded from the first. Given two geometries, A and B, the area of geometry B is removed from the area of geometry A, producing a region that is A-B.
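The four combine modes correspond to familiar set operations. A minimal sketch, using Python sets of sample points as stand-ins for geometry regions (illustrative only, not Direct2D geometry calls):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

union     = A | B   # Union: geometry A + geometry B
intersect = A & B   # Intersect: only the overlapping region
xor       = A ^ B   # Xor: (A - B) + (B - A)
exclude   = A - B   # Exclude: the area of B removed from A

print(sorted(union))      # [1, 2, 3, 4, 5, 6]
print(sorted(intersect))  # [3, 4]
print(sorted(xor))        # [1, 2, 5, 6]
print(sorted(exclude))    # [1, 2]
```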
Specifies additional features supportable by a compatible render target when it is created. This enumeration allows a bitwise combination of its member values.
-Use this enumeration when creating a compatible render target with the CreateCompatibleRenderTarget method. For more information about compatible render targets, see the Render Targets Overview.
The
The render target supports no additional features.
The render target supports interoperability with the Windows Graphics Device Interface (GDI).
Used to specify the blend mode for all of the Direct2D blending operations.
-The figure here shows an example of each of the modes with images that have an opacity of 1.0 or 0.5.
There can be slightly different interpretations of these enumeration values depending on where the value is used.
With a composite effect: -
D2D1_COMPOSITE_MODE_DESTINATION_COPY is equivalent to As a parameter to
The standard source-over-destination blend mode.
The destination is rendered over the source.
Performs a logical clip of the source pixels against the destination pixels.
The inverse of the
This is the logical inverse to
This is the logical inverse to
Writes the source pixels over the destination where there are destination pixels.
The logical inverse of
The source is inverted with the destination.
The channel components are summed.
The source is copied to the destination; the destination pixels are ignored.
Equivalent to
Destination colors are inverted according to a source mask. -
Identifiers for properties of the Composite effect.
-Identifiers for properties of the Contrast effect.
-Identifiers for properties of the Convolve matrix effect.
-The interpolation mode the Convolve matrix effect uses to scale the image to the corresponding kernel unit length. There are six scale modes that range in quality and speed.
-Identifiers for properties of the Crop effect.
-This effect combines two images by adding weighted pixels from input images. It has two inputs, named Destination and Source.
The cross fade formula is output = weight * Destination + (1 - weight) * Source.
The CLSID for this effect is
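The cross fade formula above is a straightforward per-channel weighted sum. A quick sketch of it in plain Python (tuples standing in for RGB pixels, illustrative only):

```python
def cross_fade(destination, source, weight):
    """output = weight * Destination + (1 - weight) * Source, per channel."""
    return tuple(weight * d + (1 - weight) * s
                 for d, s in zip(destination, source))

dst = (1.0, 0.0, 0.0)  # red
src = (0.0, 0.0, 1.0)  # blue
print(cross_fade(dst, src, 0.0))   # all Source:      (0.0, 0.0, 1.0)
print(cross_fade(dst, src, 1.0))   # all Destination: (1.0, 0.0, 0.0)
print(cross_fade(dst, src, 0.25))  # mostly Source:   (0.25, 0.0, 0.75)
```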
Describes the sequence of dashes and gaps in a stroke.
-The following illustration shows several available dash styles.
-A solid line with no breaks.
A dash followed by a gap of equal length. The dash and the gap are each twice as long as the stroke thickness.
The equivalent dash array for
A dot followed by a longer gap.
The equivalent dash array for
A dash, followed by a gap, followed by a dot, followed by another gap.
The equivalent dash array for
A dash, followed by a gap, followed by a dot, followed by another gap, followed by another dot, followed by another gap.
The equivalent dash array for
The dash pattern is specified by an array of floating-point values.
Indicates the type of information provided by the Direct2D Debug Layer.
-To receive debugging messages, you must install the Direct2D Debug Layer.
-Specifies how a device context is initialized for GDI rendering when it is retrieved from the render target.
-Use this enumeration with the
The current contents of the render target are copied to the device context when it is initialized.
The device context is cleared to transparent black when it is initialized.
This specifies options that apply to the device context for its lifetime.
-The device context is created with default options.
Distribute rendering work across multiple threads. Refer to Improving the performance of Direct2D apps for additional notes on the use of this flag.
Specifies the optimization mode for the Directional blur effect.
-Identifiers for properties of the Directional blur effect.
-Identifiers for properties of the Discrete transfer effect.
-Identifiers for properties of the Displacement map effect.
-Identifiers for properties of the Distant-diffuse lighting effect.
-The interpolation mode the effect uses to scale the image to the corresponding kernel unit length. There are six scale modes that range in quality and speed.
-Samples the nearest single point and uses that. This mode uses less processing time, but outputs the lowest quality image.
Uses a four point sample and linear interpolation. This mode outputs a higher quality image than nearest neighbor.
Uses a 16 sample cubic kernel for interpolation. This mode uses the most processing time, but outputs a higher quality image.
Uses 4 linear samples within a single pixel for good edge anti-aliasing. This mode is good for scaling down by small amounts on images with few pixels.
Uses anisotropic filtering to sample a pattern according to the transformed shape of the bitmap.
Uses a variable-size, high-quality cubic kernel to pre-downscale the image if downscaling is involved in the transform matrix, then uses the cubic interpolation mode for the final output.
Identifiers for properties of the Distant-specular lighting effect.
-The interpolation mode the Distant-specular lighting effect uses to scale the image to the corresponding kernel unit length. There are six scale modes that range in quality and speed.
-The interpolation mode the DPI compensation effect uses to scale the image.
-Identifiers for properties of the DPI compensation effect.
-Specifies whether text snapping is suppressed or clipping to the layout rectangle is enabled. This enumeration allows a bitwise combination of its member values.
-Text is not vertically snapped to pixel boundaries. This setting is recommended for text that is being animated.
Text is clipped to the layout rectangle.
In Windows?8.1 and later, text is rendered using color versions of glyphs, if defined by the font.
Bitmap origins of color glyph bitmaps are not snapped.
Text is vertically snapped to pixel boundaries and is not clipped to the layout rectangle.
Values for the
Identifiers for properties of the Edge Detection effect.
-The
The
The
The
The
Identifiers for properties of the Emboss effect.
-Identifiers for properties of the Exposure effect.
-Specifies how a brush paints areas outside of its normal content area.
-For an
Repeat the edge pixels of the brush's content for all regions outside the normal content area.
Repeat the brush's content.
The same as
Specifies whether Direct2D provides synchronization for an
When you create a factory, you can specify whether it is multithreaded or singlethreaded. A singlethreaded factory provides no serialization against any other single threaded instance within Direct2D, so this mechanism provides a very large degree of scaling on the CPU.
You can also create a multithreaded factory instance. In this case, the factory and all derived objects can be used from any thread, and each render target can be rendered to independently. Direct2D serializes calls to these objects, so a single multithreaded Direct2D instance won't scale as well on the CPU as many single threaded instances. However, the resources can be shared within the multithreaded instance.
Note the qualifier "On the CPU": GPUs generally take advantage of fine-grained parallelism more so than CPUs. For example, multithreaded calls from the CPU might still end up being serialized when being sent to the GPU; however, a whole bank of pixel and vertex shaders will run in parallel to perform the rendering.
-Defines capabilities of the underlying Direct3D device which may be queried using
Describes the minimum DirectX support required for hardware rendering by a render target.
-Direct2D determines whether the video card provides adequate hardware rendering support.
The video card must support DirectX 9.
The video card must support DirectX 10.
Indicates whether a specific
Indicates whether a specific
Specifies how the intersecting areas of geometries or figures are combined to form the area of the composite geometry.
-Use the
Direct2D fills the interior of a path by using one of the two fill modes specified by this enumeration:
To see the difference between the winding and alternate fill modes, assume that you have four circles with the same center and a different radius, as shown in the following illustration. The first one has the radius of 25, the second 50, the third 75, and the fourth 100.
The following illustration shows the shape filled by using the alternate fill mode. Notice that the center and third ring are not filled. This is because a ray drawn from any point in either of those two rings passes through an even number of segments.
The following illustration explains this process.
The following illustration shows how the same shape is filled when the winding fill mode is specified.
Notice that all the rings are filled. This is because all the segments run in the same direction, so a ray drawn from any point will cross one or more segments, and the sum of the crossings will not equal zero.
The following illustration explains this process. The red arrows represent the direction in which the segments are drawn and the black arrow represents an arbitrary ray that runs from a point in the innermost ring. Starting with a value of zero, for each segment that the ray crosses, a value of one is added for every clockwise intersection. All points lie in the fill region in this illustration, because the count does not equal zero.
-Determines whether a point is in the fill region by drawing a ray from that point to infinity in any direction, and then counting the number of path segments within the given shape that the ray crosses. If this number is odd, the point is in the fill region; if even, the point is outside the fill region.
Determines whether a point is in the fill region of the path by drawing a ray from that point to infinity in any direction, and then examining the places where a segment of the shape crosses the ray. Starting with a count of zero, add one each time a segment crosses the ray from left to right and subtract one each time a path segment crosses the ray from right to left, as long as left and right are seen from the perspective of the ray. After counting the crossings, if the result is zero, then the point is outside the path. Otherwise, it is inside the path.
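The two counting rules described above can be sketched directly as a ray-crossing test. This is a minimal, illustrative implementation in plain Python over lists of points, not Direct2D geometry calls; the polygon data is made up for the example:

```python
def crossings(subpaths, px, py):
    """Cast a ray from (px, py) toward +x; for each segment crossed,
    record +1 if the segment rises past the ray and -1 if it falls."""
    signs = []
    for poly in subpaths:
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            if (y1 <= py) != (y2 <= py):      # segment spans the ray's y
                t = (py - y1) / (y2 - y1)     # parameter where it crosses
                if x1 + t * (x2 - x1) > px:   # crossing lies on the ray
                    signs.append(1 if y2 > y1 else -1)
    return signs

def fill_alternate(subpaths, px, py):
    """Alternate (even-odd): inside if the ray crosses an odd number."""
    return len(crossings(subpaths, px, py)) % 2 == 1

def fill_winding(subpaths, px, py):
    """Winding (nonzero): inside unless the signed crossings sum to zero."""
    return sum(crossings(subpaths, px, py)) != 0

# Two concentric squares traced in the same direction, like the rings above:
outer = [(0, 0), (10, 0), (10, 10), (0, 10)]
inner = [(2, 2), (8, 2), (8, 8), (2, 8)]
shape = [outer, inner]

print(fill_alternate(shape, 5, 5))  # False: 2 crossings (even), center is a hole
print(fill_winding(shape, 5, 5))    # True: both crossings have the same sign
```

At the center, the ray crosses one segment of each square, so the even-odd rule reports a hole while the nonzero rule fills it; this matches the two ring illustrations described above.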
Represents filtering modes that a transform may select to use on input textures.
-This enumeration has the same numeric values as
Use point sampling for minification, magnification, and mip-level sampling.
Use point sampling for minification and magnification; use linear interpolation for mip-level sampling.
Use point sampling for minification; use linear interpolation for magnification; use point sampling for mip-level sampling.
Use point sampling for minification; use linear interpolation for magnification and mip-level sampling.
Use linear interpolation for minification; use point sampling for magnification and mip-level sampling.
Use linear interpolation for minification; use point sampling for magnification; use linear interpolation for mip-level sampling.
Use linear interpolation for minification and magnification; use point sampling for mip-level sampling.
Use linear interpolation for minification, magnification, and mip-level sampling.
Use anisotropic interpolation for minification, magnification, and mip-level sampling.
Identifiers for properties of the Flood effect.
-Specifies which gamma is used for interpolation.
-Interpolating in a linear gamma space (
The first gradient is interpolated linearly in the space of the render target (sRGB in this case), and one can see the dark bands between each color. The second gradient uses a gamma-correct linear interpolation, and thus does not exhibit the same variations in brightness.
-Interpolation is performed in the standard RGB (sRGB) gamma.
Interpolation is performed in the linear-gamma color space.
Specifies which gamma is used for interpolation.
-Interpolating in a linear gamma space (
The first gradient is interpolated linearly in the space of the render target (sRGB in this case), and one can see the dark bands between each color. The second gradient uses a gamma-correct linear interpolation, and thus does not exhibit the same variations in brightness.
-Interpolation is performed in the standard RGB (sRGB) gamma.
Interpolation is performed in the linear-gamma color space.
Identifiers for properties of the Gamma transfer effect.
-The optimization mode for the Gaussian blur effect.
-Identifiers for properties of the Gaussian blur effect.
-Describes how one geometry object is spatially related to another geometry object.
-The relationship between the two geometries cannot be determined. This value is never returned by any D2D method.
The two geometries do not intersect at all.
The instance geometry is entirely contained by the passed-in geometry.
The instance geometry entirely contains the passed-in geometry.
The two geometries overlap but neither completely contains the other.
Specifies how a geometry is simplified to an
Specifies which formats are supported in the font, either at a font-wide level or per glyph.
-Indicates no data is available for this glyph.
The glyph has TrueType outlines.
The glyph has CFF outlines.
The glyph has multilayered COLR data.
The glyph has SVG outlines as standard XML. Fonts may store the content gzip'd rather than plain text, indicated by the first two bytes as gzip header {0x1F 0x8B}.
The glyph has PNG image data, with standard PNG IHDR.
The glyph has JPEG image data, with standard JFIF SOI header.
The glyph has TIFF image data.
The glyph has raw 32-bit premultiplied BGRA data.
Values for the
Identifiers for properties of the Highlights and Shadows effect.
-Identifiers for properties of the Histogram effect.
-Identifiers for properties of the Hue rotate effect.
-Values for the
Identifiers for properties of the Hue to RGB effect.
-Option flags controlling primary conversion performed by CreateImageSourceFromDxgi, if any.
-Controls option flags for a new
D2D1_IMAGE_SOURCE_CREATION_OPTIONS_RELEASE_SOURCE causes the image source to not retain a reference to the source object used to create it. Doing so can decrease the quality and efficiency of printing.
-No options are used.
Indicates the image source should release its reference to the WIC bitmap source after it has initialized. By default, the image source retains a reference to the WIC bitmap source for the lifetime of the object to enable quality and speed optimizations for printing. This option disables that optimization. -
Indicates the image source should only populate subregions of the image cache on-demand. You can control this behavior using the EnsureCached and TrimCache methods. This option provides the ability to improve memory usage by only keeping needed portions of the image in memory. This option requires that the image source has a reference to the WIC bitmap source, and is incompatible with
Specifies the appearance of the ink nib (pen tip) as part of an
This is used to specify the quality of image scaling with
Specifies options that can be applied when a layer resource is applied to create a layer.
ClearType antialiasing must use the current contents of the render target to blend properly. When a pushed layer requests initializing for ClearType, Direct2D copies the current contents of the render target into the layer so that ClearType antialiasing can be performed. Rendering ClearType text into a transparent layer does not produce the desired results.
A small performance hit from re-copying content occurs when
Specifies how the layer contents should be prepared. -
-Default layer behavior. A premultiplied layer target is pushed and its contents are cleared to transparent black. -
The layer is not cleared to transparent black.
The layer is always created as ignore alpha. All content rendered into the layer will be treated as opaque.
Identifiers for properties of the Linear transfer effect.
-Describes the shape that joins two lines or segments.
- A miter limit affects how sharp miter joins are allowed to be. If the line join style is
The following illustration shows different line join settings for the same stroked path geometry.
-Regular angular vertices.
Beveled vertices.
Rounded vertices.
Regular angular vertices unless the join would extend beyond the miter limit; otherwise, beveled vertices.
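The miter-limit rule above can be sketched numerically: the ratio of the miter length to the stroke width is 1 / sin(theta / 2), where theta is the interior angle between the two segments, and the join falls back to a bevel when that ratio exceeds the miter limit. A hypothetical helper, not a Direct2D API:

```cpp
#include <cmath>

// Sketch of the miter-or-bevel decision for a join between two segments.
// interiorAngleRadians is the angle between the segments; the sharper the
// angle, the longer the miter spike relative to the stroke width.
inline bool UseMiterJoin(double interiorAngleRadians, double miterLimit)
{
    double ratio = 1.0 / std::sin(interiorAngleRadians / 2.0);
    return ratio <= miterLimit;
}
```

For a 90-degree join the ratio is about 1.414, well under the common default limit of 10, so the miter is kept; very sharp angles exceed the limit and bevel instead.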
Identifiers for the properties of the 3D Lookup Table effect.
-The
The
Specifies how the memory to be mapped from the corresponding
The
These flags cannot be used on bitmaps created by the
Indicates the measuring method used for text layout.
-Specifies that text is measured using glyph ideal metrics whose values are independent of the current display resolution.
Specifies that text is measured using glyph display-compatible metrics whose values are tuned for the current display resolution.
Specifies that text is measured using the same glyph display metrics as text measured by GDI using a font created with CLEARTYPE_NATURAL_QUALITY.
The mode for the Morphology effect.
-Identifiers for properties of the Morphology effect.
-Describes whether an opacity mask contains graphics or text. Direct2D uses this information to determine which gamma space to use when blending the opacity mask.
-The opacity mask contains graphics. The opacity mask is blended in the gamma 2.2 color space.
The opacity mask contains non-GDI text. The gamma space used for blending is obtained from the render target's text rendering parameters. (
The opacity mask contains text rendered using the GDI-compatible rendering mode. The opacity mask is blended using the gamma for GDI rendering.
Identifiers for properties of the Opacity metadata effect.
-This effect adjusts the opacity of an image by multiplying the alpha channel of the input by the specified opacity value. It has a single input.
The CLSID for this effect is
Specifies the flip and rotation at which an image appears.
-The orientation is unchanged.
The image is flipped horizontally.
The image is rotated clockwise 180 degrees.
The image is rotated clockwise 180 degrees, then flipped horizontally.
The image is rotated clockwise 90 degrees, then flipped horizontally.
The image is rotated clockwise 270 degrees.
The image is rotated clockwise 270 degrees, then flipped horizontally.
The image is rotated clockwise 90 degrees.
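A few of the orientations above can be expressed as pixel-coordinate remappings in a w-by-h image. A minimal sketch with a hypothetical helper (Direct2D applies these transforms internally when drawing the image source):

```cpp
#include <utility>

// A subset of the orientation values, for illustration only.
enum class Orientation { Default, FlipHorizontal, Rotate180 };

// Maps a source pixel coordinate (x, y) to its location after the
// orientation is applied to a w-by-h image.
inline std::pair<int, int> RemapPixel(Orientation o, int x, int y, int w, int h)
{
    switch (o) {
    case Orientation::FlipHorizontal: return { w - 1 - x, y };
    case Orientation::Rotate180:      return { w - 1 - x, h - 1 - y };
    default:                          return { x, y };
    }
}
```

For example, a 180-degree rotation sends the top-left pixel of a 4x3 image to the bottom-right corner.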
Specifies how to render gradient mesh edges.
-Render this patch edge aliased. Use this value for the internal edges of your gradient mesh.
Render this patch edge antialiased. Use this value for the external (boundary) edges of your mesh.
Render this patch edge aliased and also slightly inflated. Use this for the internal edges of your gradient mesh when there could be t-junctions among patches. Inflating the internal edges mitigates seams that can appear along those junctions.
Indicates whether a segment should be stroked and whether the join between this segment and the previous one should be smooth. This enumeration allows a bitwise combination of its member values.
-The segment is joined as specified by the
The segment is not stroked.
The segment is always joined with the one preceding it using a round line join, regardless of which
The interpolation mode the 3D perspective transform effect uses on the image. There are five scale modes that range in quality and speed.
-Identifiers for the properties of the 3D perspective transform effect.
-Indicates how pixel shader sampling will be restricted. This indicates whether the vertex buffer is large and tends to change infrequently or smaller and changes frequently (typically frame over frame).
- If the shader specifies
The pixel shader is not restricted in its sampling.
The pixel shader samples inputs only at the same scene coordinate as the output pixel and returns transparent black whenever the input pixels are also transparent black.
Identifiers for properties of the Point-diffuse lighting effect.
-The interpolation mode the Point-diffuse lighting effect uses to scale the image to the corresponding kernel unit length. There are six scale modes that range in quality and speed.
-Identifiers for properties of the Point-specular lighting effect.
-The interpolation mode the Point-specular lighting effect uses to scale the image to the corresponding kernel unit length. There are six scale modes that range in quality and speed.
-Identifiers for properties of the Posterize effect.
-Describes how a render target behaves when it presents its content. This enumeration allows a bitwise combination of its member values.
-The render target waits until the display refreshes to present and discards the frame upon presenting.
The render target does not discard the frame upon presenting.
The render target does not wait until the display refreshes to present.
Used to specify the geometric blend mode for all Direct2D primitives.
-The standard source-over-destination blend mode.
The source is copied to the destination; the destination pixels are ignored.
The resulting pixel values use the minimum of the source and destination pixel values. Available in Windows 8 and later.
The resulting pixel values are the sum of the source and destination pixel values. Available in Windows 8 and later.
Defines when font resources should be subset during printing.
-Uses a heuristic strategy to decide when to subset fonts.
Note: If the print driver has requested archive-optimized content, Direct2D subsets fonts once for the entire document.
Subsets and embeds font resources in each page, then discards that font subset after the page is printed.
Sends out the original font resources without subsetting along with the page that first uses the font, and re-uses the font resources for later pages without resending them.
Specifies the indices of the system properties present on the
Under normal circumstances the minimum and maximum number of inputs to the effect are the same. If the effect supports a variable number of inputs, the ID2D1Effect::SetNumberOfInputs method can be used to choose the number that the application will enable.
-Specifies the types of properties supported by the Direct2D property interface.
-An unknown property.
An arbitrary-length string.
A 32-bit integer value constrained to be either 0 or 1.
An unsigned 32-bit integer.
A signed 32-bit integer.
A 32-bit float.
Two 32-bit float values.
Three 32-bit float values.
Four 32-bit float values.
An arbitrary number of bytes.
A returned COM or nano-COM interface.
An enumeration. The value should be treated as a UINT32 with a defined array of fields to specify the bindings to human-readable strings.
An enumeration. The value is the count of sub-properties in the array. The set of array elements will be contained in the sub-property.
A CLSID.
A 3x2 matrix of float values.
A 4x2 matrix of float values.
A 4x4 matrix of float values.
A 5x4 matrix of float values.
A nano-COM color context interface reference.
The rendering priority affects the extent to which Direct2D will throttle its rendering workload.
-Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
The render target uses hardware rendering only.
Describes how a render target is remoted and whether it should be GDI-compatible. This enumeration allows a bitwise combination of its member values.
-The render target attempts to use Direct3D command-stream remoting and uses bitmap remoting if stream remoting fails. The render target is not GDI-compatible.
The render target renders content locally and sends it to the terminal services client as a bitmap.
The render target can be used efficiently with GDI.
Values for the
Identifiers for properties of the RGB to Hue effect.
-Identifiers for properties of the Saturation effect.
-The interpolation mode the Scale effect uses to scale the image. There are six scale modes that range in quality and speed.
-Identifiers for properties of the Scale effect.
-Identifiers for properties of the Sepia effect.
-The level of performance optimization for the Shadow effect.
-Identifiers for properties of the Shadow effect.
-Identifiers for properties of the Sharpen effect.
-Identifiers for properties of the Spot-diffuse lighting effect.
-The interpolation mode the Spot-diffuse lighting effect uses to scale the image to the corresponding kernel unit length. There are six scale modes that range in quality and speed.
-Identifiers for properties of the Spot-specular lighting effect.
-The interpolation mode the Spot-specular lighting effect uses to scale the image to the corresponding kernel unit length. There are six scale modes that range in quality and speed.
-Specifies additional aspects of how a sprite batch is to be drawn, as part of a call to
Identifiers for properties of the Straighten effect.
-Values for the
Defines how the world transform, dots per inch (dpi), and stroke width affect the shape of the pen used to stroke a primitive.
-If you specify
If you specify
If you specify
Apart from the stroke, any value derived from the stroke width is not affected when the transformType is either fixed or hairline. This includes miters, line caps and so on.
It is important to distinguish between the geometry being stroked and the shape of the stroke pen. When
Here is an illustration of a stroke with dashing and a skew and stretch transform.
And here is an illustration of a fixed width stroke which does not get transformed.
-The stroke respects the currently set world transform, the dpi, and the stroke width.
The stroke does not respect the world transform but it does respect the dpi and stroke width.
The stroke is forced to 1 pixel wide (in device space) and does not respect the world transform, the dpi, or the stroke width.
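The three stroke transform types above differ in which factors contribute to the effective stroke width in device space. A minimal sketch under stated assumptions: `uniformScale` stands in for the world transform's uniform scale factor and `dpiScale` for dpi / 96; the enum and helper are hypothetical, not Direct2D types.

```cpp
// Illustration of how the stroke transform type affects the effective
// stroke width in device space.
enum class StrokeTransformType { Normal, Fixed, Hairline };

inline float DeviceStrokeWidth(StrokeTransformType t, float strokeWidth,
                               float uniformScale, float dpiScale)
{
    switch (t) {
    case StrokeTransformType::Fixed:
        return strokeWidth * dpiScale;                // ignores the world transform
    case StrokeTransformType::Hairline:
        return 1.0f;                                  // always 1 pixel in device space
    default:
        return strokeWidth * uniformScale * dpiScale; // respects everything
    }
}
```

Note this only models the width; as the text above says, the geometry being stroked is still transformed normally in every mode.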
Specifies the indices of the system sub-properties that may be present in any property.
-The name for the parent property.
A Boolean indicating whether the parent property is writeable.
The minimum value that can be set to the parent property.
The maximum value that can be set to the parent property.
The default value of the parent property.
An array of name/index pairs that indicate the possible values that can be set to the parent property.
An index sub-property used by the elements of the
Describes how a render target behaves when it presents its content. This enumeration allows a bitwise combination of its member values.
-The render target waits until the display refreshes to present and discards the frame upon presenting.
The render target does not discard the frame upon presenting.
The render target does not wait until the display refreshes to present.
Indicates how pixel shader sampling will be restricted. This indicates whether the vertex buffer is large and tends to change infrequently or smaller and changes frequently (typically frame over frame).
- If the shader specifies
The pixel shader is not restricted in its sampling.
The pixel shader samples inputs only at the same scene coordinate as the output pixel and returns transparent black whenever the input pixels are also transparent black.
Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
The render target uses hardware rendering only.
Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
The render target uses hardware rendering only.
Indicates how pixel shader sampling will be restricted. This indicates whether the vertex buffer is large and tends to change infrequently or smaller and changes frequently (typically frame over frame).
- If the shader specifies
The pixel shader is not restricted in its sampling.
The pixel shader samples inputs only at the same scene coordinate as the output pixel and returns transparent black whenever the input pixels are also transparent black.
Describes the shape at the end of a line or segment.
-The following illustration shows the available cap styles for lines or segments. The red portion of the line shows the extra area added by the line cap setting.
-A cap that does not extend past the last point of the line. Comparable to the cap used for objects other than lines.
Half of a square that has a length equal to the line thickness.
Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
The render target uses hardware rendering only.
Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
The render target uses hardware rendering only.
Describes whether a render target uses hardware or software rendering, or if Direct2D should select the rendering mode.
-Not every render target supports hardware rendering. For more information, see the Render Targets Overview.
-The render target uses hardware rendering, if available; otherwise, it uses software rendering.
The render target uses software rendering only.
Defines options that should be applied to the color space.
-The color space is otherwise described, such as with a color profile.
The color space is sRGB.
Defines the direction that an elliptical arc is drawn.
-Arcs are drawn in a counterclockwise (negative-angle) direction.
Arcs are drawn in a clockwise (positive-angle) direction.
Identifiers for properties of the Table transfer effect.
-Identifiers for properties of the Temperature and Tint effect.
-Describes the antialiasing mode used for drawing text.
-This enumeration is used with the SetTextAntialiasMode method of an
By default, Direct2D renders text in ClearType mode. The following factors can downgrade the default quality to grayscale or aliased rendering:
Use the system default. See Remarks.
Use ClearType antialiasing.
Use grayscale antialiasing.
Do not use antialiasing.
Specifies the threading mode used while simultaneously creating the device, factory, and device context. -
-Resources may only be invoked serially. Device context state is not protected from multi-threaded access.
Resources may be invoked from multiple threads. Resources use interlocked reference counting and their state is protected. -
Identifiers for properties of the Tile effect.
-This effect tints the source image by multiplying the source image by the specified color. It has a single input.
The CLSID for this effect is
The interpolation mode the 3D transform effect uses on the image. There are five scale modes that range in quality and speed.
-Identifiers for properties of the 3D transform effect.
-Option flags for transformed image sources.
-No option flags.
Prevents the image source from being automatically scaled (by a ratio of the context DPI divided by 96) while drawn.
The turbulence noise mode for the Turbulence effect. Indicates whether to generate a bitmap based on Fractal Noise or the Turbulence function.
-Identifiers for properties of the Turbulence effect.
-Specifies how units in Direct2D will be interpreted.
-Setting the unit mode to
Units will be interpreted as device-independent pixels (1/96").
Units will be interpreted as pixels.
Describes flags that influence how the renderer interacts with a custom vertex shader.
-The logical equivalent of having no flags set.
If this flag is set, the renderer assumes that the vertex shader will cover the entire region of interest with vertices and need not clear the destination render target. If this flag is not set, the renderer assumes that the vertices do not cover the entire region of interest and must clear the render target to transparent black first.
The renderer will use a depth buffer when rendering custom vertices. The depth buffer will be used for calculating occlusion information. This can result in the renderer output being draw-order dependent if it contains transparency.
Indicates that custom vertices do not overlap each other.
Indicates whether the vertex buffer changes infrequently or frequently.
-If a dynamic vertex buffer is created, Direct2D will not necessarily map the buffer directly to a Direct3D vertex buffer. Instead, a system memory copy can be copied to the rendering engine vertex buffer as the effects are rendered.
-The created vertex buffer is updated infrequently.
The created vertex buffer is changed frequently.
Identifiers for properties of the Vignette effect.
-Describes whether a window is occluded.
-If the window was occluded the last time EndDraw was called, the next time the render target calls CheckWindowState, it returns
The window is not occluded.
The window is occluded.
Specifies the chroma subsampling of the input chroma image used by the YCbCr effect.
-Specifies the interpolation mode for the YCbCr effect.
-Identifiers for properties of the YCbCr effect.
-Defines an object that paints an area. Interfaces that derive from
An
Brush space in Direct2D is specified differently than in XPS and Windows Presentation Foundation (WPF). In Direct2D, brush space is not relative to the object being drawn, but rather is the current coordinate system of the render target, transformed by the brush transform, if present. To paint an object as it would be painted by a WPF brush, you must translate the brush space origin to the upper-left corner of the object's bounding box, and then scale the brush space so that the base tile fills the bounding box of the object.
For more information about brushes, see the Brushes Overview.
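The WPF-style alignment described above amounts to building a brush transform that scales a unit base tile to the object's bounding box and translates it to the box's upper-left corner. A minimal sketch with a hypothetical matrix struct laid out like Direct2D's 3x2 row matrix (m11 m12 / m21 m22 / dx dy), not the actual D2D1_MATRIX_3X2_F type:

```cpp
// Hypothetical 3x2 matrix in Direct2D's row layout.
struct Matrix3x2 { float m11, m12, m21, m22, dx, dy; };

// Builds the brush transform that maps a unit-square base tile onto the
// bounding box (left, top, width, height) of the object being painted.
inline Matrix3x2 BrushTransformForBounds(float left, float top,
                                         float width, float height)
{
    // Scale by the box size, then translate to its upper-left corner.
    return { width, 0.0f, 0.0f, height, left, top };
}
```

Setting this matrix as the brush transform makes the brush fill the object's bounding box the way a WPF brush would.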
-Gets or sets the degree of opacity of this brush.
-Gets or sets the transform applied to this brush.
-When the brush transform is the identity matrix, the brush appears in the same coordinate space as the render target in which it is drawn.
-Sets the degree of opacity of this brush.
-A value between 0.0 and 1.0 that indicates the opacity of the brush. This value is a constant multiplier that linearly scales the alpha value of all pixels filled by the brush. Opacity values are clamped to the range 0 to 1 before they are multiplied together.
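The clamp-then-multiply rule for brush opacity can be sketched as a small pure function (a hypothetical helper, not part of the Direct2D API):

```cpp
#include <algorithm>

// Clamps both values to [0, 1] before combining, mirroring the rule that
// opacity values are clamped before being multiplied together.
inline float CombinedAlpha(float pixelAlpha, float brushOpacity)
{
    pixelAlpha   = std::min(1.0f, std::max(0.0f, pixelAlpha));
    brushOpacity = std::min(1.0f, std::max(0.0f, brushOpacity));
    return pixelAlpha * brushOpacity;
}
```

An out-of-range opacity such as 2.0 therefore behaves exactly like 1.0.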
Sets the transformation applied to the brush.
-The transformation to apply to this brush.
When you paint with a brush, it paints in the coordinate space of the render target. Brushes do not automatically position themselves to align with the object being painted; by default, they begin painting at the origin (0, 0) of the render target.
You can "move" the gradient defined by an
To align the content of an
The following illustrations show the effect of using an
The illustration on the right shows the result of transforming the
Gets the degree of opacity of this brush.
-A value between 0.0 and 1.0 that indicates the opacity of the brush. This value is a constant multiplier that linearly scales the alpha value of all pixels filled by the brush. Opacity values are clamped to the range 0 to 1 before they are multiplied together.
Gets the transform applied to this brush.
-The transform applied to this brush.
When the brush transform is the identity matrix, the brush appears in the same coordinate space as the render target in which it is drawn.
-Represents the set of transforms implemented by the effect-rendering system, which provides fixed-functionality.
-Sets the properties of the output buffer of the specified transform node.
-The number of bits and the type of the output buffer.
The number of channels in the output buffer (1 or 4).
The method returns an
| Return code | Description |
|---|---|
| S_OK | No error occurred. |
| E_INVALIDARG | One or more arguments are not valid. |
You can use the
The available channel depth and precision depend on the capabilities of the underlying Microsoft Direct3D device.
-Sets whether the output of the specified transform is cached.
-TRUE if the output should be cached; otherwise,
Provides factory methods and other state management for effect and transform authors.
-Creates a 3D lookup table for mapping a 3-channel input to a 3-channel output. The table data must be provided in 4-channel format.
-Precision of the input lookup table data.
Number of lookup table elements per dimension (X, Y, Z).
Buffer holding the lookup table data.
Size of the lookup table data buffer.
An array containing two values. The first value is the size in bytes from one row (X dimension) of LUT data to the next. The second value is the size in bytes from one LUT data plane (X and Y dimensions) to the next.
Receives the new lookup table instance.
If this method succeeds, it returns
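The two stride values described above follow directly from the table layout: 4 channels per element, `extents` elements per row, and `extents` rows per plane. A minimal sketch with a hypothetical helper; `bytesPerChannel` would typically be 1 for 8-bit data or 4 for float data:

```cpp
#include <cstdint>
#include <utility>

// Computes (row stride, plane stride) in bytes for a cubic 3D lookup table
// with `extents` elements per dimension and 4 channels per element.
inline std::pair<uint32_t, uint32_t> LutStrides(uint32_t extents,
                                                uint32_t bytesPerChannel)
{
    uint32_t rowStride   = extents * 4 * bytesPerChannel; // one X row to the next
    uint32_t planeStride = rowStride * extents;           // one X-Y plane to the next
    return { rowStride, planeStride };
}
```

These are the two values that would populate the stride array parameter for tightly packed table data.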
Creates Direct2D resources. This interface also enables the creation of
Creates an
If this method succeeds, it returns
Creates Direct2D resources. This interface also enables the creation of
Creates an
If this method succeeds, it returns
Creates Direct2D resources. This interface also enables the creation of
Creates an
This method returns an
Creates Direct2D resources.
-The
A factory defines a set of CreateResource methods that can produce the following drawing resources:
To create an
Creates Direct2D resources.
-The
A factory defines a set of CreateResource methods that can produce the following drawing resources:
To create an
A Direct2D resource that wraps a WMF, EMF, or EMF+ metafile.
-Gets the bounds of the metafile, in device-independent pixels (DIPs), as reported in the metafile's header.
-This method streams the contents of the command to the given metafile sink.
-The sink into which Direct2D will call back.
The method returns an
| Return code | Description |
|---|---|
| S_OK | No error occurred. |
| E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
| E_INVALIDARG | An invalid value was passed to the method. |
Gets the bounds of the metafile, in device-independent pixels (DIPs), as reported in the metafile's header.
-The bounds, in DIPs, of the metafile.
The method returns an
| Return code | Description |
|---|---|
| S_OK | No error occurred. |
| E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call. |
| E_INVALIDARG | An invalid value was passed to the method. |
This interface performs all the same functions as the existing
Gets the bounds of the metafile in source space in DIPs. This corresponds to the frame rect in an EMF/EMF+.
-Gets the DPI reported by the metafile.
-Receives the horizontal DPI reported by the metafile.
Receives the vertical DPI reported by the metafile.
If this method succeeds, it returns
Gets the bounds of the metafile in source space in DIPs. This corresponds to the frame rect in an EMF/EMF+.
-The bounds, in DIPs, of the metafile.
A developer implemented interface that allows a metafile to be replayed.
-This interface performs all the same functions as the existing
This interface performs all the same functions as the existing
Provides access to metafile records, including their type, data, and flags.
-The type of metafile record being processed. Please see MS-EMF and MS-EMFPLUS for a list of record types.
The data contained in this record. Please see MS-EMF and MS-EMFPLUS for information on record data layouts.
The size of the data pointed to by recordData.
The set of flags set for this record. Please see MS-EMF and MS-EMFPLUS for information on record flags.
For details on the EMF and EMF+ formats, please see Microsoft technical documents MS-EMF and MS-EMFPLUS.
-A developer implemented interface that allows a metafile to be replayed.
-This method is called once for each record stored in a metafile.
-The type of the record.
The data for the record.
The byte size of the record data.
Returns true if the record was processed successfully.
Represents a geometry resource and defines a set of helper methods for manipulating and measuring geometric shapes. Interfaces that inherit from
There are several types of Direct2D geometry objects: a simple geometry (
Direct2D geometries enable you to describe two-dimensional figures and also offer many uses, such as defining hit-test regions, clip regions, and even animation paths.
Direct2D geometries are immutable and device-independent resources created by
Gets the bounds of the geometry after it has been widened by the specified stroke width and style and transformed by the specified matrix.
-The amount by which to widen the geometry by stroking its outline.
The style of the stroke that widens the geometry.
A transform to apply to the geometry after the geometry is transformed and after the geometry has been stroked.
When this method returns, contains the bounds of the widened geometry. You must allocate storage for this parameter.
Determines whether the geometry's stroke contains the specified point given the specified stroke thickness, style, and transform.
-The point to test for containment.
The thickness of the stroke to apply.
The style of stroke to apply.
The transform to apply to the stroked geometry.
The numeric accuracy with which the precise geometric path and path intersection is calculated. Points missing the stroke by less than the tolerance are still considered inside. Smaller values produce more accurate results but cause slower execution.
When this method returns, contains a boolean value set to true if the geometry's stroke contains the specified point; otherwise, false. You must allocate storage for this parameter.
Indicates whether the area filled by the geometry would contain the specified point given the specified flattening tolerance.
-The point to test.
The transform to apply to the geometry prior to testing for containment, or
The numeric accuracy with which the precise geometric path and path intersection is calculated. Points missing the fill by less than the tolerance are still considered inside. Smaller values produce more accurate results but cause slower execution.
When this method returns, contains a
Describes the intersection between this geometry and the specified geometry. The comparison is performed by using the specified flattening tolerance.
-The geometry to test.
The transform to apply to inputGeometry, or
The maximum error allowed when constructing a polygonal approximation of the geometry. No point in the polygonal representation will diverge from the original geometry by more than the flattening tolerance. Smaller values produce more accurate results but cause slower execution.
When this method returns, contains a reference to a value that describes how this geometry is related to inputGeometry. You must allocate storage for this parameter.
When interpreting the returned relation value, it is important to remember that the member
For more information about how to interpret other possible return values, see
Creates a simplified version of the geometry that contains only lines and (optionally) cubic Bezier curves and writes the result to an
If this method succeeds, it returns
Creates a set of clockwise-wound triangles that cover the geometry after it has been transformed using the specified matrix and flattened using the default tolerance.
-The transform to apply to this geometry.
The
The
If this method succeeds, it returns
Combines this geometry with the specified geometry and stores the result in an
If this method succeeds, it returns
Computes the outline of the geometry and writes the result to an
If this method succeeds, it returns
Computes the area of the geometry after it has been transformed by the specified matrix and flattened using the specified tolerance.
-The transform to apply to this geometry before computing its area.
The maximum error allowed when constructing a polygonal approximation of the geometry. No point in the polygonal representation will diverge from the original geometry by more than the flattening tolerance. Smaller values produce more accurate results but cause slower execution.
When this method returns, contains a reference to the area of the transformed, flattened version of this geometry. You must allocate storage for this parameter.
Calculates the point and tangent vector at the specified distance along the geometry after it has been transformed by the specified matrix and flattened using the default tolerance.
-The distance along the geometry of the point and tangent to find. If this distance is less then 0, this method calculates the first point in the geometry. If this distance is greater than the length of the geometry, this method calculates the last point in the geometry.
The transform to apply to the geometry before calculating the specified point and tangent.
The location at the specified distance along the geometry. If the geometry is empty, this point contains NaN as its x and y values.
When this method returns, contains a reference to the tangent vector at the specified distance along the geometry. If the geometry is empty, this vector contains NaN as its x and y values. You must allocate storage for this parameter.
The location at the specified distance along the geometry. If the geometry is empty, this point contains NaN as its x and y values.
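The clamping behavior described above (distances below zero yield the first point, distances beyond the total length yield the last) can be sketched on a simple polyline instead of a full geometry. A hypothetical helper that assumes at least two points:

```cpp
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Walks the polyline segment by segment, interpolating within the segment
// that contains the requested arc-length distance. Distances at or below 0
// return the first point; distances past the total length return the last.
inline Point PointAtLength(const std::vector<Point>& pts, double distance)
{
    if (distance <= 0.0 || pts.size() < 2) return pts.front();
    for (size_t i = 1; i < pts.size(); ++i) {
        double dx = pts[i].x - pts[i - 1].x;
        double dy = pts[i].y - pts[i - 1].y;
        double seg = std::sqrt(dx * dx + dy * dy);
        if (distance <= seg) {
            double t = distance / seg;
            return { pts[i - 1].x + t * dx, pts[i - 1].y + t * dy };
        }
        distance -= seg;
    }
    return pts.back(); // distance exceeded the total length
}
```

The real method additionally flattens curves to the given tolerance and applies the world transform before measuring, which this sketch omits.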
Widens the geometry by the specified stroke and writes the result to an
If this method succeeds, it returns
Represents a composite geometry, composed of other
Geometry groups are a convenient way to group several geometries simultaneously so all figures of several distinct geometries are concatenated into one.
-Indicates how the intersecting areas of the geometries contained in this geometry group are combined.
-Indicates the number of geometry objects in the geometry group.
-Indicates how the intersecting areas of the geometries contained in this geometry group are combined.
-A value that indicates how the intersecting areas of the geometries contained in this geometry group are combined.
Indicates the number of geometry objects in the geometry group.
-The number of geometries in the
Retrieves the geometries in the geometry group.
-When this method returns, contains the address of a reference to an array of geometries to be filled by this method. The length of the array is specified by the geometryCount parameter. If the array is
A value indicating the number of geometries to return in the geometries array. If this value is less than the number of geometries in the geometry group, the remaining geometries are omitted. If this value is larger than the number of geometries in the geometry group, the extra geometries are set to
The returned geometries are referenced and counted, and the caller must release them.
-Encapsulates a device- and transform-dependent representation of a filled or stroked geometry. Callers should consider creating a geometry realization when they wish to accelerate repeated rendering of a given geometry. This interface exposes no methods.
-Encapsulates a device- and transform-dependent representation of a filled or stroked geometry. Callers should consider creating a geometry realization when they wish to accelerate repeated rendering of a given geometry. This interface exposes no methods.
-Creates a device-dependent representation of the fill of the geometry that can be subsequently rendered.
-The geometry to realize.
The flattening tolerance to use when converting Beziers to line segments. This parameter shares the same units as the coordinates of the geometry.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
This method is used in conjunction with
If the provided stroke style specifies a stroke transform type other than
Creates a device-dependent representation of the stroke of a geometry that can be subsequently rendered.
-The geometry to realize.
The flattening tolerance to use when converting Beziers to line segments. This parameter shares the same units as the coordinates of the geometry.
The width of the stroke. This parameter shares the same units as the coordinates of the geometry.
The stroke style (optional).
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid value was passed to the method.
This method is used in conjunction with
If the provided stroke style specifies a stroke transform type other than
Describes a geometric path that can contain lines, arcs, cubic Bezier curves, and quadratic Bezier curves.
-The
A geometry sink consists of one or more figures. Each figure is made up of one or more line, curve, or arc segments. To create a figure, call the BeginFigure method, specify the figure's start point, and then use its Add methods (such as AddLine and AddBezier) to add segments. When you are finished adding segments, call the EndFigure method. You can repeat this sequence to create additional figures. When you are finished creating figures, call the Close method.
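The figure protocol in the remarks (BeginFigure, Add methods, EndFigure, Close) can be sketched as a toy sink. This is illustrative Python with simplified method names modeled loosely on ID2D1GeometrySink, not the real API:

```python
class GeometrySink:
    """Toy sink: one or more figures, each made of segments, closed once."""
    def __init__(self):
        self.figures = []
        self._current = None
        self.closed = False

    def begin_figure(self, start):
        # A figure must be ended before another one begins.
        assert self._current is None, "end the previous figure first"
        self._current = [("start", start)]

    def add_line(self, end):
        self._current.append(("line", end))

    def add_bezier(self, c1, c2, end):
        self._current.append(("bezier", c1, c2, end))

    def end_figure(self):
        self.figures.append(self._current)
        self._current = None

    def close(self):
        # Close only after every figure has been ended.
        assert self._current is None, "figure still open at close"
        self.closed = True

sink = GeometrySink()
sink.begin_figure((0, 0))
sink.add_line((10, 0))
sink.add_line((10, 10))
sink.end_figure()
sink.close()
print(len(sink.figures))  # 1
```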
-Describes a geometric path that can contain lines, arcs, cubic Bezier curves, and quadratic Bezier curves.
-The
A geometry sink consists of one or more figures. Each figure is made up of one or more line, curve, or arc segments. To create a figure, call the BeginFigure method, specify the figure's start point, and then use its Add methods (such as AddLine and AddBezier) to add segments. When you are finished adding segments, call the EndFigure method. You can repeat this sequence to create additional figures. When you are finished creating figures, call the Close method.
-Creates a line segment between the current point and the specified end point and adds it to the geometry sink.
-The end point of the line to draw.
Creates a quadratic Bezier curve between the current point and the specified end point.
-A structure that describes the control point and the end point of the quadratic Bezier curve to add.
Adds a sequence of quadratic Bezier segments as an array in a single call.
-An array of a sequence of quadratic Bezier segments.
A value indicating the number of quadratic Bezier segments in beziers.
Describes a geometric path that can contain lines, arcs, cubic Bezier curves, and quadratic Bezier curves.
-The
A geometry sink consists of one or more figures. Each figure is made up of one or more line, curve, or arc segments. To create a figure, call the BeginFigure method, specify the figure's start point, and then use its Add methods (such as AddLine and AddBezier) to add segments. When you are finished adding segments, call the EndFigure method. You can repeat this sequence to create additional figures. When you are finished creating figures, call the Close method.
-Represents a device-dependent representation of a gradient mesh composed of patches. Use the
Returns the number of patches that make up this gradient mesh.
-Returns the number of patches that make up this gradient mesh.
-Returns the number of patches that make up this gradient mesh.
Returns a subset of the patches that make up this gradient mesh.
-Index of the first patch to return.
A reference to the array to be filled with the patch data.
The number of patches to be returned.
Represents a collection of
Retrieves the number of gradient stops in the collection.
-Indicates the gamma space in which the gradient stops are interpolated.
-Indicates the behavior of the gradient outside the normalized gradient range.
-Retrieves the number of gradient stops in the collection.
-The number of gradient stops in the collection.
Copies the gradient stops from the collection into an array of
Gradient stops are copied in order of position, starting with the gradient stop with the smallest position value and progressing to the gradient stop with the largest position value.
-Indicates the gamma space in which the gradient stops are interpolated.
-The gamma space in which the gradient stops are interpolated.
Indicates the behavior of the gradient outside the normalized gradient range.
-The behavior of the gradient outside the [0,1] normalized gradient range.
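The position-ordered stops and the behavior outside the [0,1] range can be sketched as follows. This is illustrative Python; `eval_gradient` and the stop representation are assumptions for the sketch, not the Direct2D API:

```python
def eval_gradient(stops, t, extend="clamp"):
    """stops: list of (position, color) sorted ascending by position,
    as described above. Returns the linearly interpolated color at t.
    Only the 'clamp' and 'wrap' extend modes are sketched here."""
    if extend == "wrap":
        t = t % 1.0          # repeat the [0,1] range
    else:
        t = min(max(t, 0.0), 1.0)  # clamp to the nearest edge stop
    if t <= stops[0][0]:
        return stops[0][1]
    if t >= stops[-1][0]:
        return stops[-1][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if p0 <= t <= p1:
            a = (t - p0) / (p1 - p0)
            return tuple(x0 + (x1 - x0) * a for x0, x1 in zip(c0, c1))

stops = [(0.0, (0, 0, 0)), (1.0, (255, 255, 255))]
print(eval_gradient(stops, 0.5))  # (127.5, 127.5, 127.5)
```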
Represents a collection of
Gets the color space of the input colors as well as the space in which gradient stops are interpolated.
-If this object was created using
Gets the color space after interpolation has occurred.
-If you create using
Gets the precision of the gradient buffer.
-If this object was created using
Retrieves the color interpolation mode that the gradient stop collection uses.
-Copies the gradient stops from the collection into memory.
-When this method returns, contains a reference to a one-dimensional array of
The number of gradient stops to copy.
If the
If gradientStopsCount is less than the number of gradient stops in the collection, the remaining gradient stops are omitted. If gradientStopsCount is larger than the number of gradient stops in the collection, the extra gradient stops are set to
Gets the color space of the input colors as well as the space in which gradient stops are interpolated.
-This method returns the color space.
If this object was created using
Gets the color space after interpolation has occurred.
-This method returns the color space.
If you create using
Gets the precision of the gradient buffer.
-The buffer precision of the gradient buffer.
If this object was created using
Retrieves the color interpolation mode that the gradient stop collection uses.
-The color interpolation mode.
Represents a producer of pixels that can fill an arbitrary 2D plane.
-An
Images are evaluated lazily. If the type of image passed in is concrete, then the image can be directly sampled from. Other images can act only as a source of pixels and can produce content only as a result of calling
Represents a brush based on an
Gets or sets the image associated with the image brush.
-Gets or sets the extend mode of the image brush on the x-axis.
-Gets or sets the extend mode of the image brush on the y-axis of the image.
-Gets or sets the interpolation mode of the image brush.
-Gets or sets the rectangle that will be used as the bounds of the image when drawn as an image brush.
-Sets the image associated with the provided image brush.
-The image to be associated with the image brush.
Sets how the content inside the source rectangle in the image brush will be extended on the x-axis.
-The extend mode on the x-axis of the image.
Sets the extend mode on the y-axis.
-The extend mode on the y-axis of the image.
Sets the interpolation mode for the image brush.
-How the contents of the image will be interpolated to handle the brush transform.
Sets the source rectangle in the image brush.
-The source rectangle that defines the portion of the image to tile.
The top left corner of the sourceRectangle parameter maps to the brush space origin. That is, if the brush and world transforms are both identity, the portion of the image in the top left corner of the source rectangle will be rendered at (0,0) in the render target.
The source rectangle will be expanded differently depending on whether the input image is based on pixels (a bitmap or effect) or by a command list.
Gets the image associated with the image brush.
-When this method returns, contains the address of a reference to the image associated with this brush.
Gets the extend mode of the image brush on the x-axis.
-This method returns the x-extend mode.
Gets the extend mode of the image brush on the y-axis of the image.
-This method returns the y-extend mode.
Gets the interpolation mode of the image brush.
-This method returns the interpolation mode.
Gets the rectangle that will be used as the bounds of the image when drawn as an image brush.
-When this method returns, contains the address of the output source rectangle.
Represents a producer of pixels that can fill an arbitrary 2D plane.
-Allows the operating system to free the video memory of resources by discarding their content.
-OfferResources returns:
Restores access to resources that were previously offered by calling OfferResources.
-TryReclaimResources returns:
After you call OfferResources to offer one or more resources, you must call TryReclaimResources before you can use those resources again. You must check the value in resourcesDiscarded to determine whether the resource's content was discarded. If a resource's content was discarded while it was offered, its current content is undefined. Therefore, you must overwrite the resource's content before you use the resource.
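The offer/reclaim contract above can be modeled with a toy object (illustrative Python; the class and method names are invented for the sketch): content may be discarded while offered, so the discarded flag must be checked and the content regenerated before use.

```python
class OfferableResource:
    """Toy model of a resource whose memory can be offered to the OS."""
    def __init__(self, content):
        self.content = content
        self.offered = False

    def offer(self):
        self.offered = True

    def try_reclaim(self, os_discarded):
        """Returns True if the content was discarded while offered,
        in which case the content is undefined and must be rewritten."""
        self.offered = False
        if os_discarded:
            self.content = None
        return os_discarded

res = OfferableResource(b"pixels")
res.offer()
if res.try_reclaim(os_discarded=True):
    res.content = b"pixels"  # overwrite before using the resource again
print(res.content)  # b'pixels'
```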
-Produces 2D pixel data that has been sourced from WIC.
- Create an instance of
Retrieves the underlying bitmap image source from the Windows Imaging Component (WIC).
-Ensures that a specified region of the image source cache is populated. This method can be used to minimize glitches by performing expensive work to populate caches outside of a rendering loop. This method can also be used to speculatively load image data before it is needed by drawing routines.
-Specifies the region of the image, in pixels, that should be populated in the cache. By default, this is the entire extent of the image.
If this method succeeds, it returns
This API loads image data into caches of image sources, if that data was not already cached. It does not trim pre-existing caches, if any. More areas within the cache can be populated than actually requested.
The provided region must be constructed to include the scale with which the image source will subsequently be drawn. These coordinates must be provided in local coordinates. This means that they must be adjusted prior to calling the API according to the DPI and other relevant transforms, which can include the world transform and brush transforms.
This operation is only supported when the image source has been initialized using the
This method trims the populated regions of the image source cache to just the specified rectangle.
-Specifies the region of the image, in pixels, which should be preserved in the image source cache. Regions which are outside of the rectangle are evicted from the cache. By default, this is an empty rectangle, meaning that the entire image is evicted from the cache.
If this method succeeds, it returns
The provided region must be constructed to include the scale at which the image source will be drawn. These coordinates must be provided in local coordinates. This means that they must be adjusted prior to calling the API according to the DPI and other relevant transforms, which can include the world transform and brush transforms.
This method will fail if on-demand caching was not requested when the image source was created.
As with
This operation is only supported when the image source has been initialized using the
Retrieves the underlying bitmap image source from the Windows Imaging Component (WIC).
-On return contains the bitmap image source.
Represents a single continuous stroke of variable-width ink, as defined by a series of Bezier segments and widths.
-Retrieves or sets the starting point for this ink object.
-Updates the last segment in this ink object with new control points.
-Returns the number of segments in this ink object.
-Sets the starting point for this ink object. This determines where this ink object will start rendering.
-The new starting point for this ink object.
Retrieves the starting point for this ink object.
-The starting point for this ink object.
Adds the given segments to the end of this ink object.
-A reference to an array of segments to be added to this ink object.
The number of segments to be added to this ink object.
If this method succeeds, it returns
Removes the given number of segments from the end of this ink object.
-The number of segments to be removed from the end of this ink object. Note that segmentsCount must be less than or equal to the number of segments in the ink object.
If this method succeeds, it returns
Updates the specified segments in this ink object with new control points.
-The index of the first segment in this ink object to update.
A reference to the array of segment data to be used in the update.
The number of segments in this ink object that will be updated with new data. Note that segmentsCount must be less than or equal to the number of segments in the ink object minus startSegment.
If this method succeeds, it returns
Updates the last segment in this ink object with new control points.
-A reference to the segment data with which to overwrite this ink object's last segment. Note that if there are currently no segments in the ink object, SetSegmentsAtEnd will return an error.
If this method succeeds, it returns
Returns the number of segments in this ink object.
-Returns the number of segments in this ink object.
Retrieves the specified subset of segments stored in this ink object.
-The index of the first segment in this ink object to retrieve.
When this method returns, contains a reference to an array of retrieved segments.
The number of segments to retrieve. Note that segmentsCount must be less than or equal to the number of segments in the ink object minus startSegment.
If this method succeeds, it returns
Retrieves a geometric representation of this ink object.
-The ink style to be used in determining the geometric representation.
The world transform to be used in determining the geometric representation.
The flattening tolerance to be used in determining the geometric representation.
The geometry sink to which the geometry representation will be streamed.
If this method succeeds, it returns
Retrieve the bounds of the geometry, with an optional applied transform.
-The ink style to be used in determining the bounds of this ink object.
The world transform to be used in determining the bounds of this ink object.
When this method returns, contains the bounds of this ink object.
If this method succeeds, it returns
Represents a collection of style properties to be used by methods like
Retrieves or sets the transform to be applied to this style's nib shape.
-Retrieves or sets the pre-transform nib shape for this style.
-Sets the transform to apply to this style's nib shape.
-The transform to apply to this style's nib shape. Note that the translation components of the transform matrix are ignored for the purposes of rendering.
Retrieves the transform to be applied to this style's nib shape.
-When this method returns, contains a reference to the transform to be applied to this style's nib shape.
Sets the pre-transform nib shape for this style.
-The pre-transform nib shape to use in this style.
Retrieves the pre-transform nib shape for this style.
-Returns the pre-transform nib shape for this style.
Represents the backing store required to render a layer.
-To create a layer, call the CreateLayer method of the render target where the layer will be used. To draw to a layer, push the layer to the render target stack by calling the PushLayer method. After you have finished drawing to the layer, call the PopLayer method.
Between PushLayer and PopLayer calls, the layer is in use and cannot be used by another render target.
If the size of the layer is not specified, the corresponding PushLayer call determines the minimum layer size, based on the layer content bounds and the geometric mask. The layer resource can be larger than the size required by PushLayer without any rendering artifacts.
If the size of a layer is specified, or if the layer has been used and the required backing store size as calculated during PushLayer is larger than the layer, then the layer resource is expanded on each axis monotonically to ensure that it is large enough. The layer resource never shrinks in size.
-Gets the size of the layer in device-independent pixels.
-Gets the size of the layer in device-independent pixels.
-The size of the layer in device-independent pixels.
Paints an area with a linear gradient.
-An
The start point and end point are described in the brush space and are mapped to the render target when the brush is used. Note the starting and ending coordinates are absolute, not relative to the render target size. A value of (0, 0) maps to the upper-left corner of the render target, while a value of (1, 1) maps one pixel diagonally away from (0, 0). If there is a nonidentity brush transform or render target transform, the brush start point and end point are also transformed.
It is possible to specify a gradient axis that does not completely fill the area that is being painted. When this occurs, the
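The mapping of an arbitrary point onto the gradient axis can be sketched as a projection (illustrative Python; `gradient_coord` is a made-up helper, not part of the API). Values of t outside [0,1] fall beyond the axis and are resolved by the brush's extend mode:

```python
def gradient_coord(p, start, end):
    """Project p onto the start-to-end axis: t = 0 at the start point,
    t = 1 at the end point, with out-of-range values left to the
    extend mode."""
    ax, ay = start
    bx, by = end
    dx, dy = bx - ax, by - ay
    return ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)

print(gradient_coord((5, 0), (0, 0), (10, 0)))   # 0.5
print(gradient_coord((15, 3), (0, 0), (10, 0)))  # 1.5 (past the end point)
```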
Retrieves or sets the starting coordinates of the linear gradient.
-The start point and end point are described in the brush's space and are mapped to the render target when the brush is used. If there is a non-identity brush transform or render target transform, the brush's start point and end point are also transformed.
-Retrieves or sets the ending coordinates of the linear gradient.
-The start point and end point are described in the brush's space and are mapped to the render target when the brush is used. If there is a non-identity brush transform or render target transform, the brush's start point and end point are also transformed.
- Retrieves the
Sets the starting coordinates of the linear gradient in the brush's coordinate space.
-The starting two-dimensional coordinates of the linear gradient, in the brush's coordinate space.
The start point and end point are described in the brush's space and are mapped to the render target when the brush is used. If there is a non-identity brush transform or render target transform, the brush's start point and end point are also transformed.
-Sets the ending coordinates of the linear gradient in the brush's coordinate space.
-The ending two-dimensional coordinates of the linear gradient, in the brush's coordinate space.
The start point and end point are described in the brush's space and are mapped to the render target when the brush is used. If there is a non-identity brush transform or render target transform, the brush's start point and end point are also transformed.
-Retrieves the starting coordinates of the linear gradient.
-The starting two-dimensional coordinates of the linear gradient, in the brush's coordinate space.
The start point and end point are described in the brush's space and are mapped to the render target when the brush is used. If there is a non-identity brush transform or render target transform, the brush's start point and end point are also transformed.
-Retrieves the ending coordinates of the linear gradient.
-The ending two-dimensional coordinates of the linear gradient, in the brush's coordinate space.
The start point and end point are described in the brush's space and are mapped to the render target when the brush is used. If there is a non-identity brush transform or render target transform, the brush's start point and end point are also transformed.
- Retrieves the
A container for 3D lookup table data that can be passed to the LookupTable3D effect.
An ID2DLookupTable3D instance is created using
Represents a set of vertices that form a list of triangles.
-Opens the mesh for population.
-When this method returns, contains a reference to a reference to an
If this method succeeds, it returns
A locking mechanism from a Direct2D factory that Direct2D uses to control exclusive resource access in an app that uses multiple threads.
- You can get an
You should use this lock while doing any operation on a Direct3D/DXGI surface. Direct2D will wait on any call until you leave the critical section.
Note: Normal rendering is guarded automatically by an internal Direct2D lock.
-Returns whether the Direct2D factory was created with the
Returns whether the Direct2D factory was created with the
Returns true if the Direct2D factory was created as multi-threaded, or false if it was created as single-threaded.
Enters the Direct2D API critical section, if it exists.
-Leaves the Direct2D API critical section, if it exists.
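The Enter/Leave pattern above can be sketched with an ordinary lock (illustrative Python; the class and method names mirror the description, not the real interface): work on a shared Direct3D/DXGI surface is bracketed by the critical section so other rendering waits for it.

```python
import threading

class Multithread:
    """Toy model of the factory's critical section."""
    def __init__(self, protected=True):
        # Re-entrant lock only exists when the factory is multi-threaded.
        self._lock = threading.RLock() if protected else None

    def get_multithread_protected(self):
        return self._lock is not None

    def enter(self):
        if self._lock:
            self._lock.acquire()

    def leave(self):
        if self._lock:
            self._lock.release()

mt = Multithread(protected=True)
mt.enter()
try:
    pass  # operate on the shared Direct3D/DXGI surface here
finally:
    mt.leave()
print(mt.get_multithread_protected())  # True
```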
-Instructs the effect-rendering system to offset an input bitmap without inserting a rendering pass.
-Because a rendering pass is not required, the interface derives from a transform node. This allows it to be inserted into a graph but does not allow an output buffer to be specified.
-Sets the offset in the current offset transform.
-The new offset to apply to the offset transform.
Gets the offset currently in the offset transform.
-The current transform offset.
Represents a complex shape that may be composed of arcs, curves, and lines.
-An
Retrieves the number of segments in the path geometry.
-Retrieves the number of figures in the path geometry.
-Retrieves the geometry sink that is used to populate the path geometry with figures and segments.
-When this method returns, geometrySink contains the address of a reference to the geometry sink that is used to populate the path geometry with figures and segments. This parameter is passed uninitialized.
Because path geometries are immutable and can only be populated once, it is an error to call Open on a path geometry more than once.
Note that the fill mode defaults to
Copies the contents of the path geometry to the specified
If this method succeeds, it returns
Retrieves the number of segments in the path geometry.
-A reference that receives the number of segments in the path geometry when this method returns. You must allocate storage for this parameter.
If this method succeeds, it returns
Retrieves the number of figures in the path geometry.
-A reference that receives the number of figures in the path geometry when this method returns. You must allocate storage for this parameter.
If this method succeeds, it returns
The
This interface adds functionality to
Computes the point that exists at a given distance along the path geometry along with the index of the segment the point is on and the directional vector at that point.
-The distance to walk along the path.
The index of the segment at which to begin walking. Note: This index is global to the entire path, not just a particular figure.
The transform to apply to the path prior to walking.
The flattening tolerance to use when walking along an arc or Bezier segment. The flattening tolerance is the maximum error allowed when constructing a polygonal approximation of the geometry. No point in the polygonal representation will diverge from the original geometry by more than the flattening tolerance. Smaller values produce more accurate results but cause slower execution.
When this method returns, contains a description of the point that can be found at the given location.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | One of the inputs was in an invalid range.
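Walking a flattened path by arc length, as this method does, can be sketched on a polyline (illustrative Python; `point_at_length` is a made-up helper). Out-of-range distances clamp to the first or last point, matching the description above:

```python
import math

def point_at_length(polyline, distance):
    """Walk `distance` along a polyline and return (point, unit_tangent).
    Distances past either end clamp to the nearest endpoint."""
    if distance <= 0:
        (x0, y0), (x1, y1) = polyline[0], polyline[1]
        seg = math.hypot(x1 - x0, y1 - y0)
        return polyline[0], ((x1 - x0) / seg, (y1 - y0) / seg)
    walked = 0.0
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if walked + seg >= distance:
            a = (distance - walked) / seg
            return ((x0 + (x1 - x0) * a, y0 + (y1 - y0) * a),
                    ((x1 - x0) / seg, (y1 - y0) / seg))
        walked += seg
    (x0, y0), (x1, y1) = polyline[-2], polyline[-1]
    seg = math.hypot(x1 - x0, y1 - y0)
    return polyline[-1], ((x1 - x0) / seg, (y1 - y0) / seg)

path = [(0, 0), (10, 0), (10, 10)]
print(point_at_length(path, 15))  # ((10.0, 5.0), (0.0, 1.0))
```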
Converts Direct2D primitives stored in an
Converts Direct2D primitives in the passed-in command list into a fixed page representation for use by the print subsystem.
-The command list that contains the rendering operations.
The size of the page to add.
The print ticket stream.
Contains the first label for subsequent drawing operations. This parameter is passed uninitialized. If
Contains the second label for subsequent drawing operations. This parameter is passed uninitialized. If
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
D2DERR_PRINT_JOB_CLOSED | The print job is already finished.
Passes all remaining resources to the print sub-system, then cleans up and closes the current print job.
-The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
D2DERR_PRINT_JOB_CLOSED | The print job is already finished.
[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
TBD
-Represents a set of run-time bindable and discoverable properties that allow a data-driven application to modify the state of a Direct2D effect.
-This interface supports access through either indices or property names. In addition to top-level properties, each property in an
Gets the number of top-level properties.
-This method returns the number of custom properties on the
Gets the number of top-level properties.
-This method returns the number of custom (non-system) properties that can be accessed by the object.
This method returns the number of custom properties on the
Gets the number of characters for the given property name. This is a template overload. See Remarks.
-The index of the property name to retrieve.
This method returns the size in characters of the name corresponding to the given property index, or zero if the property index does not exist.
The value returned by this method can be used to ensure that the buffer size for GetPropertyName is appropriate.
template<typename U> UINT32 GetPropertyNameLength( U index ) CONST;
Gets the
This method returns a
If the property does not exist, the method returns
Gets the index corresponding to the given property name.
-The name of the property to retrieve.
The index of the corresponding property name.
If the property does not exist, this method returns D2D1_INVALID_PROPERTY_INDEX. This reserved value will never map to a valid index and will cause
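The index-or-name access pattern described here can be sketched as a toy property bag (illustrative Python; the class, method names, and example properties are invented for the sketch). A lookup on a missing name returns a sentinel that never maps to a valid index, mirroring D2D1_INVALID_PROPERTY_INDEX:

```python
INVALID_PROPERTY_INDEX = 0xFFFFFFFF  # sentinel, modeled on the reserved value

class Properties:
    """Toy property bag with access by index or by name."""
    def __init__(self, props):
        self._names = list(props)    # index -> name (insertion order)
        self._values = dict(props)   # name -> value

    def get_property_count(self):
        return len(self._names)

    def get_property_index(self, name):
        if name in self._values:
            return self._names.index(name)
        return INVALID_PROPERTY_INDEX

    def get_value(self, index):
        return self._values[self._names[index]]

p = Properties({"radius": 3.0, "mode": "soft"})
print(p.get_property_index("mode"))  # 1
print(p.get_property_index("missing") == INVALID_PROPERTY_INDEX)  # True
```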
Sets the corresponding property by index. This is a template overload. See Remarks.
-The index of the property to set.
The data to set.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
D2DERR_INVALID_PROPERTY | The specified property does not exist.
E_OUTOFMEMORY | Failed to allocate necessary memory.
D3DERR_OUT_OF_VIDEO_MEMORY | Failed to allocate required video memory.
E_INVALIDARG | One or more arguments are invalid.
E_FAIL | Unspecified failure.
template<typename T, typename U>
Gets the property value by name. This is a template overload. See Remarks.
-The property name to get.
Returns the value requested.
If propertyName does not exist, no information is retrieved.
Any error not in the standard set returned by a property implementation will be mapped into the standard error range.
template<typename T> T GetValueByName( _In_ PCWSTR propertyName ) const;
Gets the value of the property by index. This is a template overload. See Remarks.
-The index of the property from which the value is to be obtained.
Returns the value requested.
template<typename T, typename U> T GetValue( U index ) const;
Gets the size of the property value in bytes, using the property index. This is a template overload. See Remarks.
-The index of the property.
This method returns size of the value in bytes, using the property index
This method returns zero if index does not exist.
template<typename U> UINT32 GetValueSize( U index ) CONST;
Gets the sub-properties of the provided property by index. This is a template overload. See Remarks.
-The index of the sub-properties to be retrieved.
When this method returns, contains the address of a reference to the sub-properties.
If there are no sub-properties, subProperties will be
template<typename U>
Paints an area with a radial gradient.
-The
The brush maps the gradient stop position 0.0f to the gradient origin, and the position 1.0f is mapped to the ellipse boundary. When the gradient origin is within the ellipse, the contents of the ellipse enclose the entire [0, 1] range of the brush gradient stops. If the gradient origin is outside the bounds of the ellipse, the brush still works, but its gradient is not well-defined.
The start point and end point are described in the brush space and are mapped to the render target when the brush is used. Note the starting and ending coordinates are absolute, not relative to the render target size. A value of (0, 0) maps to the upper-left corner of the render target, while a value of (1, 1) maps just one pixel diagonally away from (0, 0). If there is a nonidentity brush transform or render target transform, the brush ellipse and gradient origin are also transformed.
It is possible to specify an ellipse that does not completely fill the area being painted. When this occurs, the
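For the simple case where the gradient origin coincides with the ellipse center and the ellipse is a circle, the 0-at-origin, 1-at-boundary mapping reduces to a normalized distance (illustrative Python; `radial_coord` is a made-up helper, and the general offset-origin ellipse case is more involved):

```python
import math

def radial_coord(p, center, radius):
    """Radial gradient coordinate for a circular brush whose gradient
    origin is the center: 0 at the center, 1 on the boundary."""
    return math.hypot(p[0] - center[0], p[1] - center[1]) / radius

print(radial_coord((3, 4), (0, 0), 10))  # 0.5
```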
Retrieves or sets the center of the gradient ellipse.
-Retrieves or sets the offset of the gradient origin relative to the gradient ellipse's center.
-Retrieves or sets the x-radius of the gradient ellipse.
-Retrieves or sets the y-radius of the gradient ellipse.
-Retrieves the
Specifies the center of the gradient ellipse in the brush's coordinate space.
-The center of the gradient ellipse, in the brush's coordinate space.
Specifies the offset of the gradient origin relative to the gradient ellipse's center.
-The offset of the gradient origin from the center of the gradient ellipse.
Specifies the x-radius of the gradient ellipse, in the brush's coordinate space.
-The x-radius of the gradient ellipse. This value is in the brush's coordinate space.
Specifies the y-radius of the gradient ellipse, in the brush's coordinate space.
-The y-radius of the gradient ellipse. This value is in the brush's coordinate space.
Retrieves the center of the gradient ellipse.
-The center of the gradient ellipse. This value is expressed in the brush's coordinate space.
Retrieves the offset of the gradient origin relative to the gradient ellipse's center.
-The offset of the gradient origin from the center of the gradient ellipse. This value is expressed in the brush's coordinate space.
Retrieves the x-radius of the gradient ellipse.
-The x-radius of the gradient ellipse. This value is expressed in the brush's coordinate space.
Retrieves the y-radius of the gradient ellipse.
-The y-radius of the gradient ellipse. This value is expressed in the brush's coordinate space.
Retrieves the
Describes a two-dimensional rectangle.
-Retrieves the rectangle that describes the rectangle geometry's dimensions.
-Retrieves the rectangle that describes the rectangle geometry's dimensions.
-Contains a reference to a rectangle that describes the rectangle geometry's dimensions when this method returns. You must allocate storage for this parameter.
Describes the render information common to all of the various transform implementations.
-This interface is used by a transform implementation to first describe and then indicate changes to the rendering pass that corresponds to the transform.
-Specifies that the output of the transform in which the render information is encapsulated is or is not cached.
-Provides an estimated hint of shader execution cost to D2D.
-The instruction count may be set according to the number of instructions in the shader. This information is used as a hint when rendering extremely large images. Calling this API is optional, but it may improve performance if you provide an accurate number.
Note: Instructions that occur in a loop should be counted according to the number of loop iterations. -Sets how a specific input to the transform should be handled by the renderer in terms of sampling.
-The index of the input that will have the input description applied.
The description of the input to be applied to the transform.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_INVALIDARG | An invalid parameter was passed to the returning function.
The input description must be matched correctly by the effect shader code.
-Allows a caller to control the output precision and channel-depth of the transform in which the render information is encapsulated.
-The type of buffer that should be used as an output from this transform.
The number of channels that will be used on the output buffer.
If the method succeeds, it returns
If the output precision of the transform is not specified, then it will default to the precision specified on the Direct2D device context. The maximum of 16bpc UNORM and 16bpc FLOAT is 32bpc FLOAT.
The output channel depth will match the maximum of the input channel depths if the channel depth is
There is no global output channel depth; this is always left to the control of the transforms.
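The precision-combining rule above (the maximum of 16bpc UNORM and 16bpc FLOAT is 32bpc FLOAT) can be sketched as follows. The string names here stand in for the real `D2D1_BUFFER_PRECISION` enumeration values and are illustrative only:

```python
# Illustrative precision ranking; neither 16bpc UNORM nor 16bpc FLOAT can
# exactly represent the other, so combining them promotes to 32bpc FLOAT.
_RANK = {"8BPC_UNORM": 0, "16BPC_UNORM": 1, "16BPC_FLOAT": 1, "32BPC_FLOAT": 2}

def combined_precision(a, b):
    """Pick an output precision that loses nothing from either input."""
    if {a, b} == {"16BPC_UNORM", "16BPC_FLOAT"}:
        return "32BPC_FLOAT"
    return a if _RANK[a] >= _RANK[b] else b
```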
-Specifies that the output of the transform in which the render information is encapsulated is or is not cached.
-TRUE if the output of the transform is cached; otherwise,
Provides an estimated hint of shader execution cost to D2D.
-An approximate instruction count of the associated shader.
The instruction count may be set according to the number of instructions in the shader. This information is used as a hint when rendering extremely large images. Calling this API is optional, but it may improve performance if you provide an accurate number.
Note: Instructions that occur in a loop should be counted according to the number of loop iterations. -Represents an object that can receive drawing commands. Interfaces that inherit from
Your application should create render targets once and hold onto them for the life of the application or until the render target's EndDraw method returns the
Gets or sets the current transform of the render target.
-Retrieves or sets the current antialiasing mode for nontext drawing operations.
-Gets or sets the current antialiasing mode for text and glyph drawing operations.
-Retrieves or sets the render target's current text rendering options.
-If the settings specified by textRenderingParams are incompatible with the render target's text antialiasing mode (specified by SetTextAntialiasMode), subsequent text and glyph drawing operations will fail and put the render target into an error state.
-Retrieves the pixel format and alpha mode of the render target.
-Returns the size of the render target in device-independent pixels.
-Returns the size of the render target in device pixels.
-Gets the maximum size, in device-dependent units (pixels), of any one bitmap dimension supported by the render target.
-This method returns the maximum texture size of the Direct3D device.
Note: The software renderer and WARP devices return the value of 16 megapixels (16*1024*1024). You can create a Direct2D texture that is this size, but not a Direct3D texture that is this size. -Creates a Direct2D bitmap from a reference to in-memory source data.
-The dimension of the bitmap to create in pixels.
A reference to the memory location of the image data, or
The byte count of each scanline, which is equal to (the image width in pixels × the number of bytes per pixel) + memory padding. If srcData is
The pixel format and dots per inch (DPI) of the bitmap to create.
When this method returns, contains a reference to a reference to the new bitmap. This parameter is passed uninitialized.
If this method succeeds, it returns
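The scanline byte count (pitch) described above is a simple calculation; this sketch (function name illustrative, not part of the API) shows the arithmetic:

```python
def scanline_pitch(width_px, bytes_per_pixel, padding=0):
    """Byte count of each scanline: the image width in pixels times the
    number of bytes per pixel, plus any memory padding used to align rows."""
    return width_px * bytes_per_pixel + padding
```

For example, a 256-pixel-wide 32bpp (4 bytes per pixel) image with no padding has a pitch of 1024 bytes.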
Creates an
If this method succeeds, it returns
Before Direct2D can load a WIC bitmap, that bitmap must be converted to a supported pixel format and alpha mode. For a list of supported pixel formats and alpha modes, see Supported Pixel Formats and Alpha Modes.
-Creates an
If this method succeeds, it returns
The CreateSharedBitmap method is useful for efficiently reusing bitmap data and can also be used to provide interoperability with Direct3D.
-Creates an
If this method succeeds, it returns
Creates a new
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Creates a new bitmap render target for use during intermediate offscreen drawing that is compatible with the current render target and has the same size, DPI, and pixel format (but not alpha mode) as the current render target.
-When this method returns, contains a reference to a reference to a new bitmap render target. This parameter is passed uninitialized.
If this method succeeds, it returns
The bitmap render target created by this method is not compatible with GDI and has an alpha mode of
Creates a layer resource that can be used with this render target and its compatible render targets. The new layer has the specified initial size.
-If (0, 0) is specified, no backing store is created behind the layer resource. The layer resource is allocated to the minimum size when PushLayer is called.
When the method returns, contains a reference to a reference to the new layer. This parameter is passed uninitialized.
If this method succeeds, it returns
Regardless of whether a size is initially specified, the layer automatically resizes as needed.
-Create a mesh that uses triangles to describe a shape.
-When this method returns, contains a reference to a reference to the new mesh.
If this method succeeds, it returns
To populate a mesh, use its Open method to obtain an
Draws a line between the specified points using the specified stroke style.
-The start point of the line, in device-independent pixels.
The end point of the line, in device-independent pixels.
The brush used to paint the line's stroke.
The width of the stroke, in device-independent pixels. The value must be greater than or equal to 0.0f. If this parameter isn't specified, it defaults to 1.0f. The stroke is centered on the line.
The style of stroke to paint, or
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as DrawLine) failed, check the result returned by the
Draws the outline of a rectangle that has the specified dimensions and stroke style.
-The dimensions of the rectangle to draw, in device-independent pixels.
The brush used to paint the rectangle's stroke.
The width of the stroke, in device-independent pixels. The value must be greater than or equal to 0.0f. If this parameter isn't specified, it defaults to 1.0f. The stroke is centered on the line.
The style of stroke to paint, or
When this method fails, it does not return an error code. To determine whether a drawing method (such as DrawRectangle) failed, check the result returned by the
Paints the interior of the specified rectangle.
-The dimension of the rectangle to paint, in device-independent pixels.
The brush used to paint the rectangle's interior.
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as FillRectangle) failed, check the result returned by the
Draws the outline of the specified rounded rectangle using the specified stroke style.
-The dimensions of the rounded rectangle to draw, in device-independent pixels.
The brush used to paint the rounded rectangle's outline.
The width of the stroke, in device-independent pixels. The value must be greater than or equal to 0.0f. If this parameter isn't specified, it defaults to 1.0f. The stroke is centered on the line.
The style of the rounded rectangle's stroke, or
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as DrawRoundedRectangle) failed, check the result returned by the
Paints the interior of the specified rounded rectangle.
-The dimensions of the rounded rectangle to paint, in device-independent pixels.
The brush used to paint the interior of the rounded rectangle.
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as FillRoundedRectangle) failed, check the result returned by the
Draws the outline of the specified ellipse using the specified stroke style.
-The position and radius of the ellipse to draw, in device-independent pixels.
The brush used to paint the ellipse's outline.
The width of the stroke, in device-independent pixels. The value must be greater than or equal to 0.0f. If this parameter isn't specified, it defaults to 1.0f. The stroke is centered on the line.
The style of stroke to apply to the ellipse's outline, or
The DrawEllipse method doesn't return an error code if it fails. To determine whether a drawing operation (such as DrawEllipse) failed, check the result returned by the
Paints the interior of the specified ellipse.
-The position and radius, in device-independent pixels, of the ellipse to paint.
The brush used to paint the interior of the ellipse.
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as FillEllipse) failed, check the result returned by the
Draws the outline of the specified geometry using the specified stroke style.
-The geometry to draw.
The brush used to paint the geometry's stroke.
The width of the stroke, in device-independent pixels. The value must be greater than or equal to 0.0f. If this parameter isn't specified, it defaults to 1.0f. The stroke is centered on the line.
The style of stroke to apply to the geometry's outline, or
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as DrawGeometry) failed, check the result returned by the
Paints the interior of the specified geometry.
-The geometry to paint.
The brush used to paint the geometry's interior.
The opacity mask to apply to the geometry, or
If the opacityBrush parameter is not
When this method fails, it does not return an error code. To determine whether a drawing operation (such as FillGeometry) failed, check the result returned by the
Paints the interior of the specified mesh.
-The mesh to paint.
The brush used to paint the mesh.
The current antialias mode of the render target must be
FillMesh does not expect a particular winding order for the triangles in the
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as FillMesh) failed, check the result returned by the
For this method to work properly, the render target must be using the
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as FillOpacityMask) failed, check the result returned by the
Draws the specified bitmap after scaling it to the size of the specified rectangle.
-The bitmap to render.
The size and position, in device-independent pixels in the render target's coordinate space, of the area to which the bitmap is drawn. If the rectangle is not well-ordered, nothing is drawn, but the render target does not enter an error state.
A value between 0.0f and 1.0f, inclusive, that specifies the opacity value to be applied to the bitmap; this value is multiplied against the alpha values of the bitmap's contents. Default is 1.0f.
The interpolation mode to use if the bitmap is scaled or rotated by the drawing operation. The default value is
The size and position, in device-independent pixels in the bitmap's coordinate space, of the area within the bitmap to draw;
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as DrawBitmap) failed, check the result returned by the
Draws the specified text using the format information provided by an
To create an
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as DrawText) failed, check the result returned by the
Draws the formatted text described by the specified
When drawing the same text repeatedly, using the DrawTextLayout method is more efficient than using the DrawText method because the text doesn't need to be formatted and the layout processed with each call.
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as DrawTextLayout) failed, check the result returned by the
Draws the specified glyphs.
-The origin, in device-independent pixels, of the glyphs' baseline.
The glyphs to render.
The brush used to paint the specified glyphs.
A value that indicates how glyph metrics are used to measure text when it is formatted. The default value is
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as DrawGlyphRun) failed, check the result returned by the
Gets the current transform of the render target.
-When this returns, contains the current transform of the render target. This parameter is passed uninitialized.
Sets the antialiasing mode of the render target. The antialiasing mode applies to all subsequent drawing operations, excluding text and glyph drawing operations.
-The antialiasing mode for future drawing operations.
To specify the antialiasing mode for text and glyph operations, use the SetTextAntialiasMode method.
-Retrieves the current antialiasing mode for nontext drawing operations.
-The current antialiasing mode for nontext drawing operations.
Specifies the antialiasing mode to use for subsequent text and glyph drawing operations.
-The antialiasing mode to use for subsequent text and glyph drawing operations.
Gets the current antialiasing mode for text and glyph drawing operations.
-The current antialiasing mode for text and glyph drawing operations.
Specifies text rendering options to be applied to all subsequent text and glyph drawing operations.
-The text rendering options to be applied to all subsequent text and glyph drawing operations;
If the settings specified by textRenderingParams are incompatible with the render target's text antialiasing mode (specified by SetTextAntialiasMode), subsequent text and glyph drawing operations will fail and put the render target into an error state.
-Retrieves the render target's current text rendering options.
-When this method returns, textRenderingParams contains the address of a reference to the render target's current text rendering options.
If the settings specified by textRenderingParams are incompatible with the render target's text antialiasing mode (specified by SetTextAntialiasMode), subsequent text and glyph drawing operations will fail and put the render target into an error state.
-Specifies a label for subsequent drawing operations.
-A label to apply to subsequent drawing operations.
A label to apply to subsequent drawing operations.
The labels specified by this method are printed by debug error messages. If no tag is set, the default value for each tag is 0.
-Gets the label for subsequent drawing operations.
-When this method returns, contains the first label for subsequent drawing operations. This parameter is passed uninitialized. If
When this method returns, contains the second label for subsequent drawing operations. This parameter is passed uninitialized. If
If the same address is passed for both parameters, both parameters receive the value of the second tag.
-Adds the specified layer to the render target so that it receives all subsequent drawing operations until PopLayer is called.
-The PushLayer method allows a caller to begin redirecting rendering to a layer. All rendering operations are valid in a layer. The location of the layer is affected by the world transform set on the render target.
Each PushLayer must have a matching PopLayer call. If there are more PopLayer calls than PushLayer calls, the render target is placed into an error state. If Flush is called before all outstanding layers are popped, the render target is placed into an error state, and an error is returned. The error state can be cleared by a call to EndDraw.
A particular
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as PushLayer) failed, check the result returned by the
Stops redirecting drawing operations to the layer that is specified by the last PushLayer call.
-A PopLayer must match a previous PushLayer call.
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as PopLayer) failed, check the result returned by the
Executes all pending drawing commands.
-When this method returns, contains the tag for drawing operations that caused errors or 0 if there were no errors. This parameter is passed uninitialized.
When this method returns, contains the tag for drawing operations that caused errors or 0 if there were no errors. This parameter is passed uninitialized.
If the method succeeds, it returns
This command does not flush the Direct3D device context that is associated with the render target.
Calling this method resets the error state of the render target.
-Saves the current drawing state to the specified
Sets the render target's drawing state to that of the specified
Specifies a rectangle to which all subsequent drawing operations are clipped.
-The size and position of the clipping area, in device-independent pixels.
The antialiasing mode that is used to draw the edges of clip rects that have subpixel boundaries, and to blend the clip with the scene contents. The blending is performed once when the PopAxisAlignedClip method is called, and does not apply to each primitive within the layer.
The clipRect is transformed by the current world transform set on the render target. After the transform is applied to the clipRect that is passed in, the axis-aligned bounding box for the clipRect is computed. For efficiency, the contents are clipped to this axis-aligned bounding box and not to the original clipRect that is passed in.
The following diagrams show how a rotation transform is applied to the render target, the resulting clipRect, and a calculated axis-aligned bounding box.
Assume the rectangle in the following illustration is a render target that is aligned to the screen pixels.
Apply a rotation transform to the render target. In the following illustration, the black rectangle represents the original render target and the red dashed rectangle represents the transformed render target.
After calling PushAxisAlignedClip, the rotation transform is applied to the clipRect. In the following illustration, the blue rectangle represents the transformed clipRect.
The axis-aligned bounding box is calculated. The green dashed rectangle represents the bounding box in the following illustration. All contents are clipped to this axis-aligned bounding box.
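The bounding-box step in the walkthrough above is plain geometry: transform the four corners of the clipRect, then take the per-axis minima and maxima. A sketch under a rotation about the origin (the function name is illustrative):

```python
import math

def rotated_clip_aabb(left, top, right, bottom, angle_deg):
    """Axis-aligned bounding box of a clip rectangle after rotating it
    about the origin; Direct2D clips to this box, not to the rotated rect."""
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    pts = [(x * c - y * s, x * s + y * c)
           for x in (left, right) for y in (top, bottom)]
    xs, ys = zip(*pts)
    return (min(xs), min(ys), max(xs), max(ys))
```

With no rotation, the bounding box is the clipRect itself; after a 90-degree rotation of a 10x5 rect, the box spans the rotated corners instead.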
The PushAxisAlignedClip and PopAxisAlignedClip must match. Otherwise, the error state is set. For the render target to continue receiving new commands, you can call Flush to clear the error.
A PushAxisAlignedClip and PopAxisAlignedClip pair can occur around or within a PushLayer and PopLayer, but cannot overlap. For example, the sequence of PushAxisAlignedClip, PushLayer, PopLayer, PopAxisAlignedClip is valid, but the sequence of PushAxisAlignedClip, PushLayer, PopAxisAlignedClip, PopLayer is invalid.
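The nesting rule above can be modeled with a stack: each pop must match the most recent unmatched push. This sketch only checks call ordering and is not part of any API:

```python
def clip_layer_sequence_valid(ops):
    """True if PushLayer/PopLayer and PushAxisAlignedClip/PopAxisAlignedClip
    calls nest properly and never overlap, per the rule described above."""
    pairs = {"PopLayer": "PushLayer",
             "PopAxisAlignedClip": "PushAxisAlignedClip"}
    stack = []
    for op in ops:
        if op in pairs.values():
            stack.append(op)                       # a push opens a scope
        elif op in pairs:
            if not stack or stack.pop() != pairs[op]:
                return False                       # mismatched or stray pop
    return not stack                               # all scopes closed
```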
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as PushAxisAlignedClip) failed, check the result returned by the
Removes the last axis-aligned clip from the render target. After this method is called, the clip is no longer applied to subsequent drawing operations.
-A PushAxisAlignedClip/PopAxisAlignedClip pair can occur around or within a PushLayer/PopLayer pair, but may not overlap. For example, a PushAxisAlignedClip, PushLayer, PopLayer, PopAxisAlignedClip sequence is valid, but a PushAxisAlignedClip, PushLayer, PopAxisAlignedClip, PopLayer sequence is not.
PopAxisAlignedClip must be called once for every call to PushAxisAlignedClip.
For an example, see How to Clip with an Axis-Aligned Clip Rectangle.
This method doesn't return an error code if it fails. To determine whether a drawing operation (such as PopAxisAlignedClip) failed, check the result returned by the
Clears the drawing area to the specified color.
-The color to which the drawing area is cleared, or
Direct2D interprets the clearColor as straight alpha (not premultiplied). If the render target's alpha mode is
If the render target has an active clip (specified by PushAxisAlignedClip), the clear command is applied only to the area within the clip region.
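Converting the straight-alpha clearColor into the premultiplied form a premultiplied-alpha render target stores is a per-channel multiply by alpha. A minimal sketch of that conversion (the function name is illustrative):

```python
def premultiply(r, g, b, a):
    """Convert a straight-alpha color (as Clear receives it) to
    premultiplied alpha: each color channel is scaled by the alpha."""
    return (r * a, g * a, b * a, a)
```

For example, half-transparent orange (1.0, 0.5, 0.0, 0.5) is stored as (0.5, 0.25, 0.0, 0.5).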
-Initiates drawing on this render target.
-Drawing operations can only be issued between a BeginDraw and EndDraw call.
BeginDraw and EndDraw are used to indicate that a render target is in use by the Direct2D system. Different implementations of
The BeginDraw method must be called before rendering operations can be called, though state-setting and state-retrieval operations can be performed even outside of BeginDraw/EndDraw.
After BeginDraw is called, a render target will normally build up a batch of rendering commands, but defer processing of these commands until either an internal buffer is full, the Flush method is called, or until EndDraw is called. The EndDraw method causes any batched drawing operations to complete, and then returns an
If EndDraw is called without a matched call to BeginDraw, it returns an error indicating that BeginDraw must be called before EndDraw. Calling BeginDraw twice on a render target puts the target into an error state where nothing further is drawn, and returns an appropriate
Ends drawing operations on the render target and indicates the current error state and associated tags.
-When this method returns, contains the tag for drawing operations that caused errors or 0 if there were no errors. This parameter is passed uninitialized.
When this method returns, contains the tag for drawing operations that caused errors or 0 if there were no errors. This parameter is passed uninitialized.
If the method succeeds, it returns
Drawing operations can only be issued between a BeginDraw and EndDraw call.
BeginDraw and EndDraw are used to indicate that a render target is in use by the Direct2D system. Different implementations of
The BeginDraw method must be called before rendering operations can be called, though state-setting and state-retrieval operations can be performed even outside of BeginDraw/EndDraw.
After BeginDraw is called, a render target will normally build up a batch of rendering commands, but defer processing of these commands until either an internal buffer is full, the Flush method is called, or until EndDraw is called. The EndDraw method causes any batched drawing operations to complete, and then returns an
If EndDraw is called without a matched call to BeginDraw, it returns an error indicating that BeginDraw must be called before EndDraw. Calling BeginDraw twice on a render target puts the target into an error state where nothing further is drawn, and returns an appropriate
Retrieves the pixel format and alpha mode of the render target.
-The pixel format and alpha mode of the render target.
Sets the dots per inch (DPI) of the render target.
-A value greater than or equal to zero that specifies the horizontal DPI of the render target.
A value greater than or equal to zero that specifies the vertical DPI of the render target.
This method specifies the mapping from pixel space to device-independent space for the render target. If both dpiX and dpiY are 0, the factory-read system DPI is chosen. If one parameter is zero and the other unspecified, the DPI is not changed.
For
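The pixel-to-DIP mapping the DPI establishes is linear: one device-independent pixel is 1/96 of an inch. A sketch of the conversion in both directions (function names illustrative):

```python
def pixels_to_dips(pixels, dpi):
    """Device pixels -> device-independent pixels (a DIP is 1/96 inch)."""
    return pixels * 96.0 / dpi

def dips_to_pixels(dips, dpi):
    """Device-independent pixels -> device pixels at the given DPI."""
    return dips * dpi / 96.0
```

At 96 DPI the two spaces coincide; at 192 DPI, 192 device pixels correspond to 96 DIPs.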
Return the render target's dots per inch (DPI).
-When this method returns, contains the horizontal DPI of the render target. This parameter is passed uninitialized.
When this method returns, contains the vertical DPI of the render target. This parameter is passed uninitialized.
This method indicates the mapping from pixel space to device-independent space for the render target.
For
Returns the size of the render target in device-independent pixels.
-The current size of the render target in device-independent pixels.
Returns the size of the render target in device pixels.
-The size of the render target in device pixels.
Gets the maximum size, in device-dependent units (pixels), of any one bitmap dimension supported by the render target.
-The maximum size, in pixels, of any one bitmap dimension supported by the render target.
This method returns the maximum texture size of the Direct3D device.
Note: The software renderer and WARP devices return the value of 16 megapixels (16*1024*1024). You can create a Direct2D texture that is this size, but not a Direct3D texture that is this size. -Indicates whether the render target supports the specified properties.
-The render target properties to test.
TRUE if the specified render target properties are supported by this render target; otherwise,
This method does not evaluate the DPI settings specified by the renderTargetProperties parameter.
-Ends drawing operations on the render target and indicates the current error state and associated tags.
-When this method returns, contains the tag for drawing operations that caused errors or 0 if there were no errors. This parameter is passed uninitialized.
When this method returns, contains the tag for drawing operations that caused errors or 0 if there were no errors. This parameter is passed uninitialized.
If the method succeeds, it returns
Drawing operations can only be issued between a BeginDraw and EndDraw call.
BeginDraw and EndDraw are used to indicate that a render target is in use by the Direct2D system. Different implementations of
The BeginDraw method must be called before rendering operations can be called, though state-setting and state-retrieval operations can be performed even outside of BeginDraw/EndDraw.
After BeginDraw is called, a render target will normally build up a batch of rendering commands, but defer processing of these commands until either an internal buffer is full, the Flush method is called, or until EndDraw is called. The EndDraw method causes any batched drawing operations to complete, and then returns an
If EndDraw is called without a matched call to BeginDraw, it returns an error indicating that BeginDraw must be called before EndDraw. Calling BeginDraw twice on a render target puts the target into an error state where nothing further is drawn, and returns an appropriate
Represents a Direct2D drawing resource.
-Retrieves the factory associated with this resource.
-Retrieves the factory associated with this resource.
-When this method returns, contains a reference to a reference to the factory that created this resource. This parameter is passed uninitialized.
Tracks a transform-created resource texture.
-Updates the specific resource texture inside the specific range or box using the supplied data.
-The "left" extent of the updates if specified; if
The "right" extent of the updates if specified; if
The stride to advance through the input data, according to dimension.
The number of dimensions in the resource texture. This must match the number used to load the texture.
The data to be placed into the resource texture.
The size of the data buffer to be used to update the resource texture.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
E_INVALIDARG | An invalid parameter was passed to the returning function.
The number of dimensions in the update must match those of the created texture.
-Describes a rounded rectangle.
-Retrieves a rounded rectangle that describes this rounded rectangle geometry.
-Retrieves a rounded rectangle that describes this rounded rectangle geometry.
-A reference that receives a rounded rectangle that describes this rounded rectangle geometry. You must allocate storage for this parameter.
Describes a geometric path that does not contain quadratic Bezier curves or arcs.
-A geometry sink consists of one or more figures. Each figure is made up of one or more line or Bezier curve segments. To create a figure, call the BeginFigure method and specify the figure's start point, then use AddLines and AddBeziers to add line and Bezier segments. When you are finished adding segments, call the EndFigure method. You can repeat this sequence to create additional figures. When you are finished creating figures, call the Close method.
To create geometry paths that can contain arcs and quadratic Bezier curves, use an
Describes a geometric path that does not contain quadratic Bezier curves or arcs.
-A geometry sink consists of one or more figures. Each figure is made up of one or more line or Bezier curve segments. To create a figure, call the BeginFigure method and specify the figure's start point, then use AddLines and AddBeziers to add line and Bezier segments. When you are finished adding segments, call the EndFigure method. You can repeat this sequence to create additional figures. When you are finished creating figures, call the Close method.
To create geometry paths that can contain arcs and quadratic Bezier curves, use an
Specifies the method used to determine which points are inside the geometry described by this geometry sink and which points are outside.
-The method used to determine whether a given point is part of the geometry.
The fill mode defaults to
Specifies stroke and join options to be applied to new segments added to the geometry sink.
-Stroke and join options to be applied to new segments added to the geometry sink.
After this method is called, the specified segment flags are applied to each segment subsequently added to the sink. The segment flags are applied to every additional segment until this method is called again and a different set of segment flags is specified.
-Starts a new figure at the specified point.
-The point at which to begin the new figure.
Whether the new figure should be hollow or filled.
If this method is called while a figure is currently in progress, the interface is invalidated and all future methods will fail.
-Creates a sequence of lines using the specified points and adds them to the geometry sink.
-A reference to an array of one or more points that describe the lines to draw. A line is drawn from the geometry sink's current point (the end point of the last segment drawn or the location specified by BeginFigure) to the first point in the array. If the array contains additional points, a line is drawn from the first point to the second point in the array, from the second point to the third point, and so on.
The number of points in the points array.
Creates a sequence of cubic Bezier curves and adds them to the geometry sink.
-A reference to an array of Bezier segments that describes the Bezier curves to create. A curve is drawn from the geometry sink's current point (the end point of the last segment drawn or the location specified by BeginFigure) to the end point of the first Bezier segment in the array. If the array contains additional Bezier segments, each subsequent Bezier segment uses the end point of the preceding Bezier segment as its start point.
The number of Bezier segments in the beziers array.
Ends the current figure; optionally, closes it.
-A value that indicates whether the current figure is closed. If the figure is closed, a line is drawn between the current point and the start point specified by BeginFigure.
Calling this method without a matching call to BeginFigure places the geometry sink in an error state; subsequent calls are ignored, and the overall failure will be returned when the Close method is called.
-Closes the geometry sink, indicates whether it is in an error state, and resets the sink's error state.
-If this method succeeds, it returns
Do not close the geometry sink while a figure is still in progress; doing so puts the geometry sink in an error state. For the close operation to be successful, there must be one EndFigure call for each call to BeginFigure.
After calling this method, the geometry sink might not be usable. Direct2D implementations of this interface do not allow the geometry sink to be modified after it is closed, but other implementations might not impose this restriction.
-Paints an area with a solid color.
-Retrieves or sets the color of the solid color brush.
-Specifies the color of this solid color brush.
-The color of this solid color brush.
To help create colors, Direct2D provides the ColorF class. It offers several helper methods for creating colors and provides a set of predefined colors.
-Retrieves the color of the solid color brush.
-The color of this solid color brush.
Represents a CPU-based rasterization stage in the transform pipeline graph.
-Represents a CPU-based rasterization stage in the transform pipeline graph.
-Sets the render information for the transform.
-The interface supplied to the transform to allow specifying the CPU based transform pass.
If the method succeeds, it returns
Provides a render information interface to the source transform to allow it to specify state to the rendering system.
-Draws the transform to the graphics processing unit (GPU)-based Direct2D pipeline.
-The target to which the transform should be written.
The area within the source from which the image should be drawn.
The origin within the target bitmap to which the source data should be drawn.
If the method succeeds, it returns
The implementation of the rasterizer guarantees that adding the renderRect to the targetOrigin does not exceed the bounds of the bitmap.
When implementing this method you must update the bitmap in this way:
If you set the buffer precision manually on the associated
Adds the given sprites to the end of this sprite batch.
-In Direct2D, a sprite is defined by four properties: a destination rectangle, a source rectangle, a color, and a transform. Destination rectangles are mandatory, but the remaining properties are optional.
Note: Always omit or pass a null value for properties you do not wish to use. This allows Direct2D to avoid storing values for those properties and to skip their handling entirely, which improves drawing speed. For example, suppose you have a batch of 500 sprites, and you do not wish to transform any of their destination rectangles. Rather than passing an array of identity matrices, simply omit the transforms parameter. This allows Direct2D to avoid storing any transforms and will yield the fastest drawing performance. On the other hand, if any sprite in the batch has any value set for a property, then internally Direct2D must allocate space for that property array and assign every sprite a value for that property (even if it's just the default value).
-Retrieves the number of sprites in this sprite batch.
-Adds the given sprites to the end of this sprite batch.
-The number of sprites to be added. This determines how many strides into each given array Direct2D will read.
A reference to an array containing the destination rectangles specifying where to draw the sprites on the destination device context.
A reference to an array containing the source rectangles specifying the regions of the source bitmap to draw as sprites. Direct2D will use the entire source bitmap for sprites that are assigned a null value or the InfiniteRectU. If this parameter is omitted entirely or set to a null value, then Direct2D will use the entire source bitmap for all the added sprites.
A reference to an array containing the colors to apply to each sprite. The output color is the result of component-wise multiplication of the source bitmap color and the provided color. The output color is not clamped.
Direct2D will not change the color of sprites that are assigned a null value. If this parameter is omitted entirely or set to a null value, then Direct2D will not change the color of any of the added sprites.
A reference to an array containing the transforms to apply to each sprite's destination rectangle.
Direct2D will not transform the destination rectangle of any sprites that are assigned a null value. If this parameter is omitted entirely or set to a null value, then Direct2D will not transform the destination rectangle of any of the added sprites.
Specifies the distance, in bytes, between each rectangle in the destinationRectangles array. If you provide a stride of 0, then the same destination rectangle will be used for each added sprite.
Specifies the distance, in bytes, between each rectangle in the sourceRectangles array (if that array is given). If you provide a stride of 0, then the same source rectangle will be used for each added sprite.
Specifies the distance, in bytes, between each color in the colors array (if that array is given). If you provide a stride of 0, then the same color will be used for each added sprite.
Specifies the distance, in bytes, between each transform in the transforms array (if that array is given). If you provide a stride of 0, then the same transform will be used for each added sprite.
If this method succeeds, it returns
In Direct2D, a sprite is defined by four properties: a destination rectangle, a source rectangle, a color, and a transform. Destination rectangles are mandatory, but the remaining properties are optional.
Note: Always omit or pass a null value for properties you do not wish to use. This allows Direct2D to avoid storing values for those properties and to skip their handling entirely, which improves drawing speed. For example, suppose you have a batch of 500 sprites, and you do not wish to transform any of their destination rectangles. Rather than passing an array of identity matrices, simply omit the transforms parameter. This allows Direct2D to avoid storing any transforms and will yield the fastest drawing performance. On the other hand, if any sprite in the batch has any value set for a property, then internally Direct2D must allocate space for that property array and assign every sprite a value for that property (even if it's just the default value).
-Updates the properties of the specified sprites in this sprite batch. Providing a null value for any property will leave that property unmodified for that sprite.
-The index of the first sprite in this sprite batch to update.
The number of sprites to update with new properties. This determines how many strides into each given array Direct2D will read.
A reference to an array containing the destination rectangles specifying where to draw the sprites on the destination device context.
A reference to an array containing the source rectangles specifying the regions of the source bitmap to draw as sprites.
Direct2D will use the entire source bitmap for sprites that are assigned a null value or the InfiniteRectU. If this parameter is omitted entirely or set to a null value, then Direct2D will use the entire source bitmap for all the updated sprites.
A reference to an array containing the colors to apply to each sprite. The output color is the result of component-wise multiplication of the source bitmap color and the provided color. The output color is not clamped.
Direct2D will not change the color of sprites that are assigned a null value. If this parameter is omitted entirely or set to a null value, then Direct2D will not change the color of any of the updated sprites.
A reference to an array containing the transforms to apply to each sprite's destination rectangle.
Direct2D will not transform the destination rectangle of any sprites that are assigned a null value. If this parameter is omitted entirely or set to a null value, then Direct2D will not transform the destination rectangle of any of the updated sprites.
Specifies the distance, in bytes, between each rectangle in the destinationRectangles array. If you provide a stride of 0, then the same destination rectangle will be used for each updated sprite.
Specifies the distance, in bytes, between each rectangle in the sourceRectangles array (if that array is given). If you provide a stride of 0, then the same source rectangle will be used for each updated sprite.
Specifies the distance, in bytes, between each color in the colors array (if that array is given). If you provide a stride of 0, then the same color will be used for each updated sprite.
Specifies the distance, in bytes, between each transform in the transforms array (if that array is given). If you provide a stride of 0, then the same transform will be used for each updated sprite.
Returns
Retrieves the specified subset of sprites from this sprite batch. For the best performance, use nullptr for properties that you do not need to retrieve.
-The index of the first sprite in this sprite batch to retrieve.
The number of sprites to retrieve.
When this method returns, contains a reference to an array containing the destination rectangles for the retrieved sprites.
When this method returns, contains a reference to an array containing the source rectangles for the retrieved sprites.
The InfiniteRectU is returned for any sprites that were not assigned a source rectangle.
When this method returns, contains a reference to an array containing the colors to be applied to the retrieved sprites.
The color {1.0f, 1.0f, 1.0f, 1.0f} is returned for any sprites that were not assigned a color.
When this method returns, contains a reference to an array containing the transforms to be applied to the retrieved sprites.
The identity matrix is returned for any sprites that were not assigned a transform.
If this method succeeds, it returns
Retrieves the number of sprites in this sprite batch.
-Returns the number of sprites in this sprite batch.
Removes all sprites from this sprite batch.
-Describes the caps, miter limit, line join, and dash information for a stroke.
-Retrieves the type of shape used at the beginning of a stroke.
-Retrieves the type of shape used at the end of a stroke.
-Gets a value that specifies how the ends of each dash are drawn.
-Retrieves the limit on the ratio of the miter length to half the stroke's thickness.
-Retrieves the type of joint used at the vertices of a shape's outline.
-Retrieves a value that specifies how far in the dash sequence the stroke will start.
-Gets a value that describes the stroke's dash pattern.
-If a custom dash style is specified, the dash pattern is described by the dashes array, which can be retrieved by calling the GetDashes method.
-Retrieves the number of entries in the dashes array.
-Retrieves the type of shape used at the beginning of a stroke.
-The type of shape used at the beginning of a stroke.
Retrieves the type of shape used at the end of a stroke.
-The type of shape used at the end of a stroke.
Gets a value that specifies how the ends of each dash are drawn.
-A value that specifies how the ends of each dash are drawn.
Retrieves the limit on the ratio of the miter length to half the stroke's thickness.
-A positive number greater than or equal to 1.0f that describes the limit on the ratio of the miter length to half the stroke's thickness.
Retrieves the type of joint used at the vertices of a shape's outline.
-A value that specifies the type of joint used at the vertices of a shape's outline.
Retrieves a value that specifies how far in the dash sequence the stroke will start.
-A value that specifies how far in the dash sequence the stroke will start.
Gets a value that describes the stroke's dash pattern.
-A value that describes the predefined dash pattern used, or
If a custom dash style is specified, the dash pattern is described by the dashes array, which can be retrieved by calling the GetDashes method.
-Retrieves the number of entries in the dashes array.
-The number of entries in the dashes array if the stroke is dashed; otherwise, 0.
Copies the dash pattern to the specified array.
-A reference to an array that will receive the dash pattern. The array must be able to contain at least as many elements as specified by dashesCount. You must allocate storage for this array.
The number of dashes to copy. If this value is less than the number of dashes in the stroke style's dashes array, the returned dashes are truncated to dashesCount. If this value is greater than the number of dashes in the stroke style's dashes array, the extra dashes are set to 0.0f. To obtain the actual number of dashes in the stroke style's dashes array, use the GetDashesCount method.
The dashes are specified in units that are a multiple of the stroke width, with subsequent members of the array indicating the dashes and gaps between dashes: the first entry indicates a filled dash, the second a gap, and so on.
-Describes the caps, miter limit, line join, and dash information for a stroke.
-This interface adds functionality to
Gets the stroke transform type.
-Gets the stroke transform type.
-This method returns the stroke transform type.
This interface performs all the same functions as the
This interface performs all the same functions as the
Interface for all SVG elements.
-This object supplies the values for context-fill, context-stroke, and context-value that are used when rendering SVG glyphs.
-Returns or sets the requested fill parameters.
-Returns the number of dashes in the dash array.
-Provides values to an SVG glyph for fill.
-Describes how the area is painted. A null brush will cause the context-fill value to come from the defaultFillBrush. If the defaultFillBrush is also null, the context-fill value will be 'none'. To set the 'context-fill' value, this method uses the provided brush with its opacity set to 1. To set the 'context-fill-opacity' value, this method uses the opacity of the provided brush.
This method returns an
Returns the requested fill parameters.
-Describes how the area is painted.
Provides values to an SVG glyph for stroke properties. The brush with opacity set to 1 is used as the 'context-stroke'. The opacity of the brush is used as the 'context-stroke-opacity' value.
-Describes how the stroke is painted. A null brush will cause the context-stroke value to be none.
Specifies the 'context-value' for the 'stroke-width' property.
Specifies the 'context-value' for the 'stroke-dasharray' property. A null value will cause the stroke-dasharray to be set to 'none'.
The number of dashes in the dash array.
Specifies the 'context-value' for the 'stroke-dashoffset' property.
This method returns an
Returns the number of dashes in the dash array.
-Returns the number of dashes in the dash array.
Returns the requested stroke parameters. Any parameters that are non-null will receive the value of the requested parameter.
-Describes how the stroke is painted.
The 'context-value' for the 'stroke-width' property.
The 'context-value' for the 'stroke-dasharray' property.
The number of dashes in the dash array.
The 'context-value' for the 'stroke-dashoffset' property.
Represents a bitmap that has been bound to an
This interface performs all the same functions as the
Interface describing an SVG points value in a polyline or polygon element.
-This interface performs all the same functions as the
Populates an
Populates an
Copies the specified triangles to the sink.
-An array of
The number of triangles to copy from the triangles array.
Closes the sink and returns its error status.
-If this method succeeds, it returns
Represents the base interface for all of the transforms implemented by the transform author.
-Transforms are aggregated by effect authors. This interface provides a common interface for implementing the Shantzis rectangle calculations which is the basis for all the transform processing in Direct2D imaging extensions. These calculations are described in the paper A model for efficient and flexible image computing.
-[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Allows a transform to state how it would map a rectangle requested on its output to a set of sample rectangles on its input.
-The output rectangle to which the inputs must be mapped.
The corresponding set of inputs. The inputs will directly correspond to the transform inputs.
The transform implementation must ensure that any pixel shader or software callback implementation it provides honors this calculation.
The transform implementation must regard this method as purely functional. It can base the mapped input and output rectangles on its current state as specified by the encapsulating effect properties. However, it must not change its own state in response to this method being invoked. The Direct2D renderer implementation reserves the right to call this method at any time and in any sequence.
Performs the inverse mapping to MapOutputRectToInputRects.
-The transform implementation must ensure that any pixel shader or software callback implementation it provides honors this calculation.
The transform implementation must regard this method as purely functional. It can base the mapped input and output rectangles on its current state as specified by the encapsulating effect properties. However, it must not change its own state in response to this method being invoked. The Direct2D renderer implementation reserves the right to call this method at any time and in any sequence.
-Represents a geometry that has been transformed.
-Using an
Retrieves the source geometry of this transformed geometry object.
-Retrieves the matrix used to transform the
Retrieves the source geometry of this transformed geometry object.
-When this method returns, contains the address of a reference to the source geometry for this transformed geometry object. This parameter is passed uninitialized.
Retrieves the matrix used to transform the
Represents an image source which shares resources with an original image source.
-Retrieves the source image used to create the transformed image source. This value corresponds to the value passed to CreateTransformedImageSource.
-Retrieves the properties specified when the transformed image source was created. This value corresponds to the value passed to CreateTransformedImageSource.
-Retrieves the source image used to create the transformed image source. This value corresponds to the value passed to CreateTransformedImageSource.
-Retrieves the properties specified when the transformed image source was created. This value corresponds to the value passed to CreateTransformedImageSource.
-Represents a graph of transform nodes.
-This interface allows a graph of transform nodes to be specified. This interface is passed to
Returns the number of inputs to the transform graph.
-Returns the number of inputs to the transform graph.
-The number of inputs to this transform graph.
Sets a single transform node as being equivalent to the whole graph.
-The node to be set.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
This is equivalent to calling
Adds the provided node to the transform graph.
-The node that will be added to the transform graph.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_OUTOFMEMORY | Direct2D could not allocate sufficient memory to complete the call.
This adds a transform node to the transform graph. A node must be added to the transform graph before it can be interconnected in any way. -
A transform graph cannot be directly added to another transform graph.
- Only interfaces derived from
Removes the provided node from the transform graph.
-The node that will be removed from the transform graph.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
D2DERR_NOT_FOUND | Direct2D could not locate the specified node.
The node must already exist in the graph; otherwise, the call fails with D2DERR_NOT_FOUND.
Any connections to this node will be removed when the node is removed.
After the node is removed, it cannot be used by the interface until it has been added to the graph by AddNode.
-Sets the output node for the transform graph.
-The node that will be considered the output of the transform node.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
D2DERR_NOT_FOUND | Direct2D could not locate the specified node.
The node must already exist in the graph; otherwise, the call fails with D2DERR_NOT_FOUND.
-Connects two nodes inside the transform graph.
-The node from which the connection will be made.
The node to which the connection will be made.
The node input that will be connected.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
D2DERR_NOT_FOUND | Direct2D could not locate the specified node.
Both nodes must already exist in the graph; otherwise, the call fails with D2DERR_NOT_FOUND.
-Connects a transform node inside the graph to the corresponding effect input of the encapsulating effect.
-The effect input to which the transform node will be bound.
The node to which the connection will be made.
The node input that will be connected.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
D2DERR_NOT_FOUND | Direct2D could not locate the specified node.
Clears the transform nodes and all connections from the transform graph.
-Used when enough changes to transforms would make editing of the transform graph inefficient.
-Uses the specified input as the effect output.
-The index of the input to the effect.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
D2DERR_NOT_FOUND | Direct2D could not locate the specified node.
Represents the base interface for all of the transforms implemented by the transform author.
-Transforms are aggregated by effect authors. This interface provides a common interface for implementing the Shantzis rectangle calculations which is the basis for all the transform processing in Direct2D imaging extensions. These calculations are described in the paper A model for efficient and flexible image computing.
-Allows a transform to state how it would map a rectangle requested on its output to a set of sample rectangles on its input.
-The output rectangle from which the inputs must be mapped.
The corresponding set of inputs. The inputs will directly correspond to the transform inputs.
The number of inputs specified. Direct2D guarantees that this is equal to the number of inputs specified on the transform.
If the method succeeds, it returns
The transform implementation must ensure that any pixel shader or software callback implementation it provides honors this calculation.
The transform implementation must regard this method as purely functional. It can base the mapped input and output rectangles on its current state as specified by the encapsulating effect properties. However, it must not change its own state in response to this method being invoked. The Direct2D renderer implementation reserves the right to call this method at any time and in any sequence.
-Performs the inverse mapping to MapOutputRectToInputRects.
-The transform implementation must ensure that any pixel shader or software callback implementation it provides honors this calculation.
Unlike the MapOutputRectToInputRects and MapInvalidRect functions, this method is explicitly called by the renderer at a determined place in its rendering algorithm. The transform implementation may change its state based on the input rectangles and use this information to control its rendering information. This method is always called before the MapInvalidRect and MapOutputRectToInputRects methods of the transform.
-Sets the input rectangles for this rendering pass into the transform.
-The index of the input rectangle.
The invalid input rectangle.
The output rectangle to which the input rectangle must be mapped.
The transform implementation must regard MapInvalidRect as purely functional. The transform implementation can base the mapped input rectangle on the transform implementation's current state as specified by the encapsulating effect properties. But the transform implementation can't change its own state in response to a call to MapInvalidRect. Direct2D can call this method at any time and in any sequence following a call to the MapInputRectsToOutputRect method. -
-Describes a node in a transform topology.
-Transform nodes are type-less and only define the notion of an object that accepts a number of inputs and is an output. This interface limits a topology to single output nodes.
-Describes a node in a transform topology.
-Transform nodes are type-less and only define the notion of an object that accepts a number of inputs and is an output. This interface limits a topology to single output nodes.
-Gets the number of inputs to the transform node.
-This method returns the number of inputs to this transform node.
Defines a mappable single-dimensional vertex buffer.
-Maps the provided data into user memory.
-When this method returns, contains the address of a reference to the available buffer.
The desired size of the buffer.
The method returns an
Return code | Description
---|---
S_OK | No error occurred.
E_INVALIDARG | An invalid parameter was passed to the returning function.
D3DERR_DEVICELOST | The device has been lost but cannot be reset at this time.
If data is larger than bufferSize, this method fails.
-Unmaps the vertex buffer.
-The method returns an
Return code | Description
---|---
S_OK | No error occurred.
D2DERR_WRONG_STATE | The object was not in the correct state to process the method.
After this method returns, the mapped memory from the vertex buffer is no longer accessible by the effect.
-Renders drawing instructions to a window.
-As is the case with other render targets, you must call BeginDraw before issuing drawing commands. After you've finished drawing, call EndDraw to indicate that drawing is finished and to release access to the buffer backing the render target. For
A hardware render target's back-buffer is the size specified by GetPixelSize. If EndDraw presents the buffer, this bitmap is stretched to cover the surface where it is presented: the entire client area of the window. This stretch is performed using bilinear filtering if the render target is rendering in hardware and using nearest-neighbor filtering if the rendering target is using software. (Typically, an application will call Resize to ensure the pixel size of the render target and the pixel size of the destination match, and no scaling is necessary, though this is not a requirement.)
In the case where a window straddles adapters, Direct2D ensures that the portion of the off-screen render target is copied from the adapter where rendering is occurring to the adapter that needs to display the contents. If the adapter a render target is on has been removed or the driver upgraded while the application is running, this is returned as an error in the EndDraw call. In this case, the application should create a new render target and resources as necessary. -
- Returns the
Indicates whether the
A value that indicates whether the
After this method is called, the contents of the render target's back-buffer are not defined, even if the
Returns the
The
Describes an elliptical arc between two points.
-The end point of the arc.
The x-radius and y-radius of the arc.
A value that specifies how many degrees in the clockwise direction the ellipse is rotated relative to the current coordinate system.
A value that specifies whether the arc sweep is clockwise or counterclockwise.
A value that specifies whether the given arc is larger than 180 degrees.
Represents a cubic Bezier segment drawn between two points.
-A cubic Bezier curve is defined by four points: a start point, an end point (point3), and two control points (point1 and point2). A Bezier segment does not contain a property for the starting point of the curve; it defines only the end point. The beginning point of the curve is the current point of the path to which the Bezier curve is added.
The two control points of a cubic Bezier curve behave like magnets, attracting portions of what would otherwise be a straight line toward themselves and producing a curve. The first control point, point1, affects the beginning portion of the curve; the second control point, point2, affects the ending portion of the curve.
Note: The curve doesn't necessarily pass through either of the control points; each control point moves its portion of the line toward itself, but not through itself.
-The first control point for the Bezier segment.
The second control point for the Bezier segment.
The end point for the Bezier segment.
Describes the extend modes and the interpolation mode of an
Describes the extend modes and the interpolation mode of an
Defines a blend description to be used in a particular blend transform.
-This description closely matches the
Specifies the first RGB data source and includes an optional preblend operation.
Specifies the second RGB data source and includes an optional preblend operation.
Specifies how to combine the RGB data sources.
Specifies the first alpha data source and includes an optional preblend operation. Blend options that end in _COLOR are not allowed.
Specifies the second alpha data source and includes an optional preblend operation. Blend options that end in _COLOR are not allowed.
Specifies how to combine the alpha data sources.
Parameters to the blend operations. The blend must use
Describes the opacity and transformation of a brush.
-This structure is used when creating a brush. For convenience, Direct2D provides the D2D1::BrushProperties function for creating
After creating a brush, you can change its opacity or transform by calling the SetOpacity or SetTransform methods.
-A value between 0.0f and 1.0f, inclusive, that specifies the degree of opacity of the brush.
The transformation that is applied to the brush.
Specifies the options with which the Direct2D device, factory, and device context are created. -
-The root objects referred to here are the Direct2D device, Direct2D factory and the Direct2D device context. -
-Describes the drawing state of a render target.
-The antialiasing mode for subsequent nontext drawing operations.
The antialiasing mode for subsequent text and glyph drawing operations.
A label for subsequent drawing operations.
A label for subsequent drawing operations.
The transformation to apply to subsequent drawing operations.
Describes the drawing state of a device context.
-The antialiasing mode for subsequent nontext drawing operations.
The antialiasing mode for subsequent text and glyph drawing operations.
A label for subsequent drawing operations.
A label for subsequent drawing operations.
The transformation to apply to subsequent drawing operations.
The blend mode for the device context to apply to subsequent drawing operations.
Contains the debugging level of an
To enable debugging, you must install the Direct2D Debug Layer.
-Describes compute shader support, which is an option on D3D10 feature level.
-You can fill this structure by passing a D2D1_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS structure to
Shader model 4 compute shaders are supported.
Describes the support for doubles in shaders.
-Fill this structure by passing a
TRUE if doubles are supported within the shaders.
Represents a tensor patch with 16 control points, 4 corner colors, and boundary flags. An
The following image shows the numbering of control points on a tensor grid.
-Contains the position and color of a gradient stop.
-Gradient stops can be specified in any order if they are at different positions. Two stops may share a position. In this case, the first stop specified is treated as the "low" stop (nearer 0.0f) and subsequent stops are treated as "higher" (nearer 1.0f). This behavior is useful if a caller wants an instant transition in the middle of a stop.
Typically, there are at least two points in a collection, although creation with only one stop is permitted. For example, one point is at position 0.0f, another point is at position 1.0f, and additional points are distributed in the [0, 1] range. Where the gradient progression is beyond the range of [0, 1], the stops are stored, but may affect the gradient.
When drawn, the [0, 1] range of positions is mapped to the brush, in a brush-dependent way. For details, see
Gradient stops with a position outside the [0, 1] range cannot be seen explicitly, but they can still affect the colors produced in the [0, 1] range. For example, a two-stop gradient {0.0f, Black}, {2.0f, White} is indistinguishable visually from {0.0f, Black}, {1.0f, Mid-level gray}. Also, the colors are clamped before interpolation.
-A value that indicates the relative position of the gradient stop in the brush. This value must be in the [0.0f, 1.0f] range if the gradient stop is to be seen explicitly.
The color of the gradient stop.
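The clamping and interpolation behavior described above can be sketched with a small stand-alone helper. This is not the Direct2D implementation; the Stop type here is a simplified stand-in for D2D1_GRADIENT_STOP that carries a single grayscale channel instead of a full D2D1_COLOR_F:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Simplified gradient stop: a position and a grayscale color value.
struct Stop { float position; float gray; };

// Evaluate the gradient at t. Stops outside [0, 1] still shape the ramp:
// colors are interpolated across the full stop range, but t is clamped to
// [0, 1] before sampling, which is why {0, Black}, {2, White} looks the
// same as {0, Black}, {1, Mid-level gray} over the visible range.
float Evaluate(const std::vector<Stop>& stops, float t) {
    t = std::clamp(t, 0.0f, 1.0f);
    if (t <= stops.front().position) return stops.front().gray;
    if (t >= stops.back().position)  return stops.back().gray;
    for (size_t i = 1; i < stops.size(); ++i) {
        if (t <= stops[i].position) {
            const Stop& a = stops[i - 1];
            const Stop& b = stops[i];
            float f = (t - a.position) / (b.position - a.position);
            return a.gray + (b.gray - a.gray) * f;
        }
    }
    return stops.back().gray;
}
```

For example, a stop at position 2.0f is never sampled directly, but it determines the color reached at position 1.0f.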
Contains the
Use this structure when you call the CreateHwndRenderTarget method to create a new
For convenience, Direct2D provides the D2D1::HwndRenderTargetProperties function for creating new
Describes image brush features.
-The source rectangle in the image space from which the image will be tiled or interpolated.
The extend mode in the image x-axis.
The extend mode in the image y-axis.
The interpolation mode to use when scaling the image brush.
Represents a Bezier segment to be used in the creation of an
Represents a point, radius pair that makes up part of a
Defines the general pen tip shape and the transform used in an
Describes the options that transforms may set on input textures.
-The type of filter to apply to the input texture.
The mip level to retrieve from the upstream transform, if specified.
A description of a single element to the vertex layout.
-This structure is a subset of
If the D2D1_APPEND_ALIGNED_ELEMENT constant is used for alignedByteOffset, the elements will be packed contiguously for convenience. -
-The HLSL semantic associated with this element in a shader input-signature.
The semantic index for the element. A semantic index modifies a semantic, with an integer index number. A semantic index is only needed in a case where there is more than one element with the same semantic. For example, a 4x4 matrix would have four components each with the semantic name matrix; however, each of the four components would have different semantic indices (0, 1, 2, and 3).
The data type of the element data.
An integer value that identifies the input-assembler. Valid values are between 0 and 15.
The offset in bytes between each element.
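The append-aligned packing described above can be sketched with a small stand-alone helper. This is not Direct2D's implementation; the element sizes and the ResolveOffsets helper are hypothetical, and kAppendAligned mirrors the role of the D2D1_APPEND_ALIGNED_ELEMENT constant:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sentinel mirroring D2D1_APPEND_ALIGNED_ELEMENT: "pack this element
// immediately after the previous one."
constexpr uint32_t kAppendAligned = 0xFFFFFFFF;

struct ElementDesc {
    uint32_t size;              // size of the element's data type, in bytes
    uint32_t alignedByteOffset; // explicit offset, or kAppendAligned
};

// Resolve each element's byte offset within a vertex. Elements marked
// kAppendAligned are packed contiguously after the previous element.
std::vector<uint32_t> ResolveOffsets(const std::vector<ElementDesc>& elems) {
    std::vector<uint32_t> offsets;
    uint32_t next = 0;
    for (const ElementDesc& e : elems) {
        uint32_t off =
            (e.alignedByteOffset == kAppendAligned) ? next : e.alignedByteOffset;
        offsets.push_back(off);
        next = off + e.size;
    }
    return offsets;
}
```

For instance, a float3 position (12 bytes), float2 texture coordinate (8 bytes), and float4 color (16 bytes) declared append-aligned resolve to offsets 0, 12, and 20.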
Contains the content bounds, mask information, opacity settings, and other options for a layer resource.
-The content bounds of the layer. Content outside these bounds is not guaranteed to render.
The geometric mask specifies the area of the layer that is composited into the render target.
A value that specifies the antialiasing mode for the geometricMask.
A value that specifies the transform that is applied to the geometric mask when composing the layer.
An opacity value that is applied uniformly to all resources in the layer when compositing to the target.
A brush that is used to modify the opacity of the layer. The brush is mapped to the layer, and the alpha channel of each mapped brush pixel is multiplied against the corresponding layer pixel.
A value that specifies whether the layer intends to render text with ClearType antialiasing.
Contains the content bounds, mask information, opacity settings, and other options for a layer resource.
-The content bounds of the layer. Content outside these bounds is not guaranteed to render.
The geometric mask specifies the area of the layer that is composited into the render target.
A value that specifies the antialiasing mode for the geometricMask.
A value that specifies the transform that is applied to the geometric mask when composing the layer.
An opacity value that is applied uniformly to all resources in the layer when compositing to the target.
A brush that is used to modify the opacity of the layer. The brush is mapped to the layer, and the alpha channel of each mapped brush pixel is multiplied against the corresponding layer pixel.
Additional options for the layer creation.
Contains the starting point and endpoint of the gradient axis for an
Use this method when creating new
The following illustration shows how a linear gradient changes as you change its start and end points. For the first gradient, the start point is set to (0,0) and the end point to (150, 50); this creates a diagonal gradient that starts at the upper-left corner and extends to the lower-right corner of the area being painted. When you set the start point to (0, 25) and the end point to (150, 25), a horizontal gradient is created. Similarly, setting the start point to (75, 0) and the end point to (75, 50) creates a vertical gradient. Setting the start point to (0, 50) and the end point to (150, 0) creates a diagonal gradient that starts at the lower-left corner and extends to the upper-right corner of the area being painted.
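The start/end point geometry above can be sketched as a projection: a point's position along the gradient axis is its normalized projection onto the line from the start point to the end point. This helper is illustrative only, not a Direct2D API:

```cpp
#include <cassert>

struct Point { float x, y; };

// Position of point p along the gradient axis from start to end: 0 at the
// start point, 1 at the end point, computed as the normalized projection of
// p onto the axis. Gradient stop positions are compared against this value.
float GradientPosition(Point start, Point end, Point p) {
    float ax = end.x - start.x, ay = end.y - start.y;
    float len2 = ax * ax + ay * ay;
    return ((p.x - start.x) * ax + (p.y - start.y) * ay) / len2;
}
```

With the horizontal axis from (0, 25) to (150, 25), every point with x = 75 sits at position 0.5 regardless of its y coordinate, which is why that axis produces a horizontal gradient.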
- Describes mapped memory from the
The mapped rectangle is used to map a rectangle into the caller's address space.
-Contains the data format and alpha mode for a bitmap or render target.
-For more information about the pixel formats and alpha modes supported by each render target, see Supported Pixel Formats and Alpha Modes.
-A value that specifies the size and arrangement of channels in each pixel.
A value that specifies whether the alpha channel is using pre-multiplied alpha or straight alpha, whether it should be ignored and considered opaque, or whether it is unknown.
Describes a point on a path geometry.
-The end point after walking the path.
A unit vector indicating the tangent point.
The index of the segment on which the point resides. This index is global to the entire path, not just to a particular figure.
The index of the figure on which the point resides.
The length of the section of the path stretching from the start of the path to the start of endSegment.
The creation properties for a
Defines a property binding to a pair of functions which get and set the corresponding property.
-The propertyName is used to cross-correlate the property binding with the registration XML. The propertyName must be present in the XML call or the registration will fail. All properties must be bound.
-The name of the property.
The function that will receive the data to set.
The function that will be asked to write the output data.
Contains the control point and end point for a quadratic Bezier segment.
-The control point of the quadratic Bezier segment.
The end point of the quadratic Bezier segment.
Contains the gradient origin offset and the size and position of the gradient ellipse for an
Different values for center, gradientOriginOffset, radiusX and/or radiusY produce different gradients. The following illustration shows several radial gradients that have different gradient origin offsets, creating the appearance of the light illuminating the circles from different angles.
For convenience, Direct2D provides the D2D1::RadialGradientBrushProperties function for creating new D2D1_RADIAL_GRADIENT_BRUSH structures.
-Describes limitations to be applied to an imaging effect renderer.
-The renderer can allocate tiles larger than the minimum tile allocation. The allocated tiles will be powers of two of the minimum size on each axis, except that the size on each axis will not exceed the guaranteed maximum texture size for the device feature level.
The minimumPixelRenderExtent is the size of the square tile below which the renderer will expand the tile allocation rather than attempting to subdivide the rendering tile any further. When this threshold is reached, the allocation tile size is expanded. This might occur repeatedly until rendering can either proceed or it is determined that the graph cannot be rendered.
The buffer precision is used for intermediate buffers if it is otherwise unspecified by the effects or the internal effect topology. The application can also use the Output.BufferPrecision method to specify the output precision for a particular effect. This takes precedence over the context precision. In addition, the effect might set a different precision internally if required. If the buffer type on the context is
The buffer precision used by default if the buffer precision is not otherwise specified by the effect or the transform.
The tile allocation size to be used by the imaging effect renderer.
Contains rendering options (hardware or software), pixel format, DPI information, remoting options, and Direct3D support requirements for a render target.
-Use this structure when creating a render target, or use it with the
As a convenience, Direct2D provides the D2D1::RenderTargetProperties helper function for creating
Not all render targets support hardware rendering. For a list, see the Render Targets Overview.
-A value that specifies whether the render target should force hardware or software rendering. A value of
The pixel format and alpha mode of the render target. You can use the D2D1::PixelFormat function to create a pixel format that specifies that Direct2D should select the pixel format and alpha mode for you. For a list of pixel formats and alpha modes supported by each render target, see Supported Pixel Formats and Alpha Modes.
The horizontal DPI of the render target. To use the default DPI, set dpiX and dpiY to 0. For more information, see the Remarks section.
The vertical DPI of the render target. To use the default DPI, set dpiX and dpiY to 0. For more information, see the Remarks section.
A value that specifies how the render target is remoted and whether it should be GDI-compatible. Set to
A value that specifies the minimum Direct3D feature level required for hardware rendering. If the specified minimum level is not available, the render target uses software rendering if the type member is set to
Defines a resource texture when the original resource texture is created.
-The extents of the resource table in each dimension.
The number of dimensions in the resource texture. This must be a number from 1 to 3.
The precision of the resource texture to create.
The number of channels in the resource texture.
The filtering mode to use on the texture.
Specifies how pixel values beyond the extent of the texture will be sampled, in every dimension.
Contains the dimensions and corner radii of a rounded rectangle.
-Each corner of the rectangle specified by the rect is replaced with a quarter ellipse, with a radius in each direction specified by radiusX and radiusY.
If the radiusX is greater than or equal to half the width of the rectangle, and the radiusY is greater than or equal to one-half the height, the rounded rectangle is an ellipse with the same width and height of the rect.
Even when both radiusX and radiusY are zero, the rounded rectangle is different from a rectangle: when stroked, the corners of the rounded rectangle are roundly joined, not mitered (square).
-The coordinates of the rectangle.
The x-radius for the quarter ellipse that is drawn to replace every corner of the rectangle.
The y-radius for the quarter ellipse that is drawn to replace every corner of the rectangle.
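The degenerate-ellipse condition described above can be sketched with a small stand-alone check. The RoundedRect type and IsEllipse helper here are hypothetical, not Direct2D's D2D1_ROUNDED_RECT API:

```cpp
#include <cassert>

// Simplified rounded rectangle: bounds plus corner radii.
struct RoundedRect { float left, top, right, bottom, radiusX, radiusY; };

// True when radiusX is at least half the width and radiusY at least half the
// height, in which case the rounded rectangle degenerates to an ellipse with
// the same width and height as the rect.
bool IsEllipse(const RoundedRect& rr) {
    float halfW = (rr.right - rr.left) / 2.0f;
    float halfH = (rr.bottom - rr.top) / 2.0f;
    return rr.radiusX >= halfW && rr.radiusY >= halfH;
}
```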
Creates a color context from a simple color profile. It is only valid to use this with the Color Management Effect in 'Best' mode.
-The simple color profile to create the color context from.
The created color context.
Describes the stroke that outlines a shape.
-The following illustration shows different dashOffset values for the same custom dash style.
-The cap applied to the start of all the open figures in a stroked geometry.
The cap applied to the end of all the open figures in a stroked geometry.
The shape at either end of each dash segment.
A value that describes how segments are joined. This value is ignored for a vertex if the segment flags specify that the segment should have a smooth join.
The limit of the thickness of the join on a mitered corner. This value is always treated as though it is greater than or equal to 1.0f.
A value that specifies whether the stroke has a dash pattern and, if so, the dash style.
A value that specifies an offset in the dash sequence. A positive dash offset value shifts the dash pattern, in units of stroke width, toward the start of the stroked geometry. A negative dash offset value shifts the dash pattern, in units of stroke width, toward the end of the stroked geometry.
Describes the stroke that outlines a shape.
-The cap to use at the start of each open figure.
The cap to use at the end of each open figure.
The cap to use at the start and end of each dash.
The line join to use.
The limit beyond which miters are either clamped or converted to bevels.
The type of dash to use.
The location of the first dash, relative to the start of the figure.
The rule that determines what render target properties affect the nib of the stroke.
A 3D vector that consists of three single-precision floating-point values (x, y, z).
-The x value of the vector.
The y value of the vector.
The z value of the vector.
A description of a single element to the vertex layout.
-This structure is a subset of
If the D2D1_APPEND_ALIGNED_ELEMENT constant is used for alignedByteOffset, the elements will be packed contiguously for convenience. -
-The HLSL semantic associated with this element in a shader input-signature.
The semantic index for the element. A semantic index modifies a semantic, with an integer index number. A semantic index is only needed in a case where there is more than one element with the same semantic. For example, a 4x4 matrix would have four components each with the semantic name matrix; however, each of the four components would have different semantic indices (0, 1, 2, and 3).
The data type of the element data.
A 3D vector that consists of three single-precision floating-point values (x, y, z).
-The x value of the vector.
The y value of the vector.
The z value of the vector.
Properties of a transformed image source.
-The orientation at which the image source is drawn.
The horizontal scale factor at which the image source is drawn.
The vertical scale factor at which the image source is drawn.
The interpolation mode used when the image source is drawn. This is ignored if the image source is drawn using the DrawImage method, or using an image brush.
Image source option flags.
Contains the three vertices that describe a triangle.
-The first vertex of a triangle.
The second vertex of a triangle.
The third vertex of a triangle.
Defines the properties of a vertex buffer that are standard for all vertex shader definitions.
-If usage is dynamic, the system might return a system memory buffer and copy these vertices into the rendering vertex buffer for each element.
If the initialization data is not specified, the buffer will be uninitialized.
-The number of inputs to the vertex shader.
Indicates how frequently the vertex buffer is likely to be updated.
The initial contents of the vertex buffer.
The size of the vertex buffer, in bytes.
Defines a range of vertices that are used when rendering less than the full contents of a vertex buffer.
-The first vertex in the range to process.
The number of vertices to use.
Encapsulates a 32-bit device independent bitmap and device context, which can be used for rendering glyphs.
-You create an
if (SUCCEEDED(hr))
{
    hr = g_pGdiInterop->CreateBitmapRenderTarget(hdc, r.right, r.bottom, &g_pBitmapRenderTarget);
}
Draws a run of glyphs to a bitmap target at the specified position.
-The horizontal position of the baseline origin, in DIPs, relative to the upper-left corner of the DIB.
The vertical position of the baseline origin, in DIPs, relative to the upper-left corner of the DIB.
The measuring method for glyphs in the run, used with the other properties to determine the rendering mode.
The structure containing the properties of the glyph run.
The object that controls rendering behavior.
The foreground color of the text.
The optional rectangle that receives the bounding box (in pixels not DIPs) of all the pixels affected by drawing the glyph run. The black box rectangle may extend beyond the dimensions of the bitmap.
If this method succeeds, it returns
You can use the
STDMETHODIMP GdiTextRenderer::DrawGlyphRun(
    __maybenull void* clientDrawingContext,
    FLOAT baselineOriginX,
    FLOAT baselineOriginY,
    DWRITE_MEASURING_MODE measuringMode,
    __in DWRITE_GLYPH_RUN const* glyphRun,
    __in DWRITE_GLYPH_RUN_DESCRIPTION const* glyphRunDescription,
    IUnknown* clientDrawingEffect
    )
{
    HRESULT hr = S_OK;

    // Pass on the drawing call to the render target to do the real work.
    RECT dirtyRect = {0};

    hr = pRenderTarget_->DrawGlyphRun(
        baselineOriginX,
        baselineOriginY,
        measuringMode,
        glyphRun,
        pRenderingParams_,
        RGB(0,200,255),
        &dirtyRect
        );

    return hr;
}
The baselineOriginX, baselineOriginY, measuringMode, and glyphRun parameters are provided (as arguments) when the callback method is invoked. The renderingParams, textColor, and blackBoxRect are not.
Default rendering params can be retrieved by using the
Gets a handle to the memory device context.
- An application can use the device context to draw using GDI functions. An application can obtain the bitmap handle (
Note that this method takes no parameters and returns an
HDC memoryHdc = g_pBitmapRenderTarget->GetMemoryDC();
The
Gets or sets the number of bitmap pixels per DIP.
-A DIP (device-independent pixel) is 1/96 inch. Therefore, this value is the number of pixels per inch divided by 96.
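The DPI-to-DIP relationship above reduces to a simple scale factor; the following helpers are illustrative only (PixelsPerDip and DipsToPixels are not DirectWrite APIs):

```cpp
#include <cassert>

// A DIP (device-independent pixel) is 1/96 inch, so the pixels-per-DIP
// factor for a display is its DPI divided by 96.
float PixelsPerDip(float dpi) { return dpi / 96.0f; }

// Convert a length in DIPs to physical pixels for a given DPI.
float DipsToPixels(float dips, float dpi) { return dips * PixelsPerDip(dpi); }
```

At 96 DPI the factor is 1, and at 120 DPI it is 1.25 (120/96), matching the pixelsPerDip examples given for CreateGdiCompatibleTextLayout below.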
-Gets or sets the transform that maps abstract coordinates to DIPs. By default this is the identity transform. Note that this is unrelated to the world transform of the underlying device context.
-Gets the dimensions of the target bitmap.
-Draws a run of glyphs to a bitmap target at the specified position.
-The horizontal position of the baseline origin, in DIPs, relative to the upper-left corner of the DIB.
The vertical position of the baseline origin, in DIPs, relative to the upper-left corner of the DIB.
The measuring method for glyphs in the run, used with the other properties to determine the rendering mode.
The structure containing the properties of the glyph run.
The object that controls rendering behavior.
The foreground color of the text.
The optional rectangle that receives the bounding box (in pixels not DIPs) of all the pixels affected by drawing the glyph run. The black box rectangle may extend beyond the dimensions of the bitmap.
If this method succeeds, it returns
You can use the
STDMETHODIMP GdiTextRenderer::DrawGlyphRun(
    __maybenull void* clientDrawingContext,
    FLOAT baselineOriginX,
    FLOAT baselineOriginY,
    DWRITE_MEASURING_MODE measuringMode,
    __in DWRITE_GLYPH_RUN const* glyphRun,
    __in DWRITE_GLYPH_RUN_DESCRIPTION const* glyphRunDescription,
    IUnknown* clientDrawingEffect
    )
{
    HRESULT hr = S_OK;

    // Pass on the drawing call to the render target to do the real work.
    RECT dirtyRect = {0};

    hr = pRenderTarget_->DrawGlyphRun(
        baselineOriginX,
        baselineOriginY,
        measuringMode,
        glyphRun,
        pRenderingParams_,
        RGB(0,200,255),
        &dirtyRect
        );

    return hr;
}
The baselineOriginX, baselineOriginY, measuringMode, and glyphRun parameters are provided (as arguments) when the callback method is invoked. The renderingParams, textColor, and blackBoxRect are not.
Default rendering params can be retrieved by using the
Gets a handle to the memory device context.
-Returns a device context handle to the memory device context.
An application can use the device context to draw using GDI functions. An application can obtain the bitmap handle (
Note that this method takes no parameters and returns an
HDC memoryHdc = g_pBitmapRenderTarget->GetMemoryDC();
The
Gets the number of bitmap pixels per DIP.
-The number of bitmap pixels per DIP.
A DIP (device-independent pixel) is 1/96 inch. Therefore, this value is the number of pixels per inch divided by 96.
-Sets the number of bitmap pixels per DIP (device-independent pixel). A DIP is 1/96 inch, so this value is the number of pixels per inch divided by 96.
-A value that specifies the number of pixels per DIP.
If this method succeeds, it returns
Gets the transform that maps abstract coordinates to DIPs. By default this is the identity transform. Note that this is unrelated to the world transform of the underlying device context.
-When this method returns, contains a transform matrix.
If this method succeeds, it returns
Sets the transform that maps abstract coordinates to DIPs (device-independent pixels). This does not affect the world transform of the underlying device context.
- Specifies the new transform. This parameter can be
If this method succeeds, it returns
Gets the dimensions of the target bitmap.
-Returns the width and height of the bitmap in pixels.
If this method succeeds, it returns
Resizes the bitmap.
-The new bitmap width, in pixels.
The new bitmap height, in pixels.
If this method succeeds, it returns
Used to create all subsequent DirectWrite objects. This interface is the root factory interface for all DirectWrite objects.
- Create an
if (SUCCEEDED(hr))
{
    hr = DWriteCreateFactory(
        DWRITE_FACTORY_TYPE_SHARED,
        __uuidof(IDWriteFactory),
        reinterpret_cast<IUnknown**>(&pDWriteFactory_)
        );
}
An
Creates an object that is used for interoperability with GDI.
-Gets an object which represents the set of installed fonts.
-If this parameter is nonzero, the function performs an immediate check for changes to the set of installed fonts. If this parameter is
When this method returns, contains the address of a reference to the system font collection object, or
Creates a font collection using a custom font collection loader.
-An application-defined font collection loader, which must have been previously registered using RegisterFontCollectionLoader.
The key used by the loader to identify a collection of font files. The buffer allocated for this key should at least be the size of collectionKeySize.
The size, in bytes, of the collection key.
Contains an address of a reference to the system font collection object if the method succeeds, or
If this method succeeds, it returns
Registers a custom font collection loader with the factory object.
-Pointer to a
If this method succeeds, it returns
This function registers a font collection loader with DirectWrite. The font collection loader interface, which should be implemented by a singleton object, handles enumerating font files in a font collection given a particular type of key. A given instance can only be registered once. Succeeding attempts will return an error, indicating that it has already been registered. Note that font file loader implementations must not register themselves with DirectWrite inside their constructors, and must not unregister themselves inside their destructors, because registration and unregistration operations increment and decrement the object reference count respectively. Instead, registration and unregistration with DirectWrite of font file loaders should be performed outside of the font file loader implementation.
-Unregisters a custom font collection loader that was previously registered using RegisterFontCollectionLoader.
-If this method succeeds, it returns
Creates a font file reference object from a local font file.
-An array of characters that contains the absolute file path for the font file. Subsequent operations on the constructed object may fail if the user-provided filePath doesn't correspond to a valid file on the disk.
The last modified time of the input file path. If the parameter is omitted, the function will access the font file to obtain its last write time. You should specify this value to avoid extra disk access. Subsequent operations on the constructed object may fail if the user-provided lastWriteTime doesn't match the file on the disk.
When this method returns, contains an address of a reference to the newly created font file reference object, or
If this method succeeds, it returns
Creates a reference to an application-specific font file resource.
-A font file reference key that uniquely identifies the font file resource during the lifetime of fontFileLoader.
The size of the font file reference key in bytes.
The font file loader that will be used by the font system to load data from the file identified by fontFileReferenceKey.
Contains an address of a reference to the newly created font file object when this method succeeds, or
If this method succeeds, it returns
This function is provided for cases when an application or a document needs to use a private font without having to install it on the system. fontFileReferenceKey has to be unique only in the scope of the fontFileLoader used in this call.
-Creates an object that represents a font face.
-A value that indicates the type of file format of the font face.
The number of font files, in element count, required to represent the font face.
A font file object representing the font face. Because
The zero-based index of a font face, in cases when the font files contain a collection of font faces. If the font files contain a single face, this value should be zero.
A value that indicates which, if any, font face simulation flags for algorithmic means of making text bold or italic are applied to the current font face.
When this method returns, contains an address of a reference to the newly created font face object, or
If this method succeeds, it returns
Creates an object that represents a font face.
-A value that indicates the type of file format of the font face.
The number of font files, in element count, required to represent the font face.
A font file object representing the font face. Because
The zero-based index of a font face, in cases when the font files contain a collection of font faces. If the font files contain a single face, this value should be zero.
A value that indicates which, if any, font face simulation flags for algorithmic means of making text bold or italic are applied to the current font face.
When this method returns, contains an address of a reference to the newly created font face object, or
If this method succeeds, it returns
Creates an object that represents a font face.
-A value that indicates the type of file format of the font face.
The number of font files, in element count, required to represent the font face.
A font file object representing the font face. Because
The zero-based index of a font face, in cases when the font files contain a collection of font faces. If the font files contain a single face, this value should be zero.
A value that indicates which, if any, font face simulation flags for algorithmic means of making text bold or italic are applied to the current font face.
When this method returns, contains an address of a reference to the newly created font face object, or
If this method succeeds, it returns
Creates a rendering parameters object with default settings for the primary monitor. Different monitors may have different rendering parameters, for more information see the How to Add Support for Multiple Monitors topic.
-Standard
Creates a rendering parameters object with default settings for the specified monitor. In most cases, this is the preferred way to create a rendering parameters object.
-A handle for the specified monitor.
When this method returns, contains an address of a reference to the rendering parameters object created by this method.
If this method succeeds, it returns
Creates a rendering parameters object with the specified properties.
-The gamma level to be set for the new rendering parameters object.
The enhanced contrast level to be set for the new rendering parameters object.
The ClearType level to be set for the new rendering parameters object.
Represents the internal structure of a device pixel (that is, the physical arrangement of red, green, and blue color components) that is assumed for purposes of rendering text.
A value that represents the method (for example, ClearType natural quality) for rendering glyphs.
When this method returns, contains an address of a reference to the newly created rendering parameters object.
If this method succeeds, it returns
Registers a font file loader with DirectWrite.
-Pointer to a
If this method succeeds, it returns
This function registers a font file loader with DirectWrite. The font file loader interface, which should be implemented by a singleton object, handles loading font file resources of a particular type from a key. A given instance can only be registered once. Succeeding attempts will return an error, indicating that it has already been registered. Note that font file loader implementations must not register themselves with DirectWrite inside their constructors, and must not unregister themselves inside their destructors, because registration and unregistration operations increment and decrement the object reference count respectively. Instead, registration and unregistration with DirectWrite of font file loaders should be performed outside of the font file loader implementation.
-Unregisters a font file loader that was previously registered with the DirectWrite font system using RegisterFontFileLoader.
-If this method succeeds, it returns
This function unregisters font file loader callbacks with the DirectWrite font system. The font file loader interface should be implemented by a singleton object. Note that font file loader implementations must not register themselves with DirectWrite inside their constructors and must not unregister themselves in their destructors, because registration and unregistration operations increment and decrement the object reference count respectively. Instead, registration and unregistration of font file loaders with DirectWrite should be performed outside of the font file loader implementation.
-Creates a text format object used for text layout.
-An array of characters that contains the name of the font family
A reference to a font collection object. When this is
A value that indicates the font weight for the text object created by this method.
A value that indicates the font style for the text object created by this method.
A value that indicates the font stretch for the text object created by this method.
The logical size of the font in DIP ("device-independent pixel") units. A DIP equals 1/96 inch.
An array of characters that contains the locale name.
When this method returns, contains an address of a reference to a newly created text format object, or NULL in case of failure.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
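Because fontSize is given in DIPs (1/96 inch) rather than points (1/72 inch), traditional point sizes have to be converted before being passed to CreateTextFormat. A minimal sketch of that conversion; the helper is illustrative and not part of DirectWrite:

```cpp
#include <cassert>

// Illustrative helper (not a DirectWrite API): converts a point size to
// DIPs. One point is 1/72 inch and one DIP is 1/96 inch, so the factor
// is 96/72.
inline float PointsToDips(float points)
{
    return points * 96.0f / 72.0f;
}
```

For example, a 12-point font corresponds to a fontSize of 16 DIPs.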
Creates a typography object for use in a text layout.
-When this method returns, contains the address of a reference to a newly created typography object, or NULL in case of failure.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates an object that is used for interoperability with GDI.
-When this method returns, contains an address of a reference to a GDI interop object if successful, or NULL in case of failure.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Takes a string, text format, and associated constraints, and produces an object that represents the fully analyzed and formatted result.
-An array of characters that contains the string to create a new text layout object from.
The number of characters in the string.
A reference to an object that indicates the format to apply to the string.
The width of the layout box.
The height of the layout box.
When this method returns, contains an address of a reference to the resultant text layout object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Takes a string, format, and associated constraints, and produces an object representing the result, formatted for a particular display resolution and measuring mode.
-An array of characters that contains the string to create a new text layout object from.
The length of the string, in character count.
The text formatting object to apply to the string.
The width of the layout box.
The height of the layout box.
The number of physical pixels per DIP (device independent pixel). For example, if rendering onto a 96 DPI device pixelsPerDip is 1. If rendering onto a 120 DPI device pixelsPerDip is 1.25 (120/96).
An optional transform applied to the glyphs and their positions. This transform is applied after the scaling specified by the font size and pixels per DIP.
Instructs the text layout to use the same metrics as GDI bi-level text when set to FALSE. When set to TRUE, instructs the text layout to use the same metrics as text measured by GDI using a font created with CLEARTYPE_NATURAL_QUALITY.
When this method returns, contains an address to the reference of the resultant text layout object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The resulting text layout should only be used for the intended resolution, and for cases where text scalability is desired CreateTextLayout should be used instead.
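The pixelsPerDip parameter described above is simply the display DPI divided by 96. A small illustrative helper (not a DirectWrite API) makes the relationship explicit:

```cpp
#include <cassert>

// Illustrative helper (not part of DirectWrite): computes the
// pixelsPerDip value expected by CreateGdiCompatibleTextLayout from a
// display DPI. DIPs are defined against a 96 DPI baseline.
inline float PixelsPerDip(float dpi)
{
    return dpi / 96.0f;
}
```

So a 96 DPI surface yields 1.0 and a 120 DPI surface yields 1.25, matching the examples in the parameter description.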
-Creates an inline object for trimming, using an ellipsis as the omission sign.
-A text format object, created with CreateTextFormat, used for text layout.
When this method returns, contains an address of a reference to the omission (that is, ellipsis trimming) sign created by this method.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The ellipsis will be created using the current settings of the format, including base font, style, and any effects. Alternate omission signs can be created by the application by implementing the inline object interface.
Returns an interface for performing text analysis.
-When this method returns, contains an address of a reference to the newly created text analyzer object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates a number substitution object using a locale name, substitution method, and an indicator whether to ignore user overrides (use NLS defaults for the given culture instead).
-A value that specifies how to apply number substitution on digits and related punctuation.
The name of the locale to be used in the numberSubstitution object.
A Boolean flag that indicates whether to ignore user overrides.
When this method returns, contains an address to a reference to the number substitution object created by this method.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates a glyph run analysis object, which encapsulates information used to render a glyph run.
-A structure that contains the properties of the glyph run (font face, advances, and so on).
Number of physical pixels per DIP (device independent pixel). For example, if rendering onto a 96 DPI bitmap then pixelsPerDip is 1. If rendering onto a 120 DPI bitmap then pixelsPerDip is 1.25.
Optional transform applied to the glyphs and their positions. This transform is applied after the scaling specified by the emSize and pixelsPerDip.
A value that specifies the rendering mode, which must be one of the raster rendering modes (that is, not default and not outline).
Specifies the measuring mode to use with glyphs.
The horizontal position (X-coordinate) of the baseline origin, in DIPs.
Vertical position (Y-coordinate) of the baseline origin, in DIPs.
When this method returns, contains an address of a reference to the newly created glyph run analysis object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The glyph run analysis object contains the results of analyzing the glyph run, including the positions of all the glyphs and references to all of the rasterized glyphs in the font cache.
-Creates a rendering parameters object with the specified properties.
-The root factory interface for all DirectWrite objects.
-Gets a font collection representing the set of EUDC (end-user defined characters) fonts.
-The font collection to fill.
Whether to check for updates.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Note that if no EUDC is set on the system, the returned collection will be empty, meaning it will return success but GetFontFamilyCount will be zero.
-Creates a rendering parameters object with the specified properties.
-The gamma level to be set for the new rendering parameters object.
The enhanced contrast level to be set for the new rendering parameters object.
The amount of contrast enhancement to use for grayscale antialiasing, zero or greater.
The ClearType level to be set for the new rendering parameters object.
Represents the internal structure of a device pixel (that is, the physical arrangement of red, green, and blue color components) that is assumed for purposes of rendering text.
A value that represents the method (for example, ClearType natural quality) for rendering glyphs.
When this method returns, contains an address of a reference to the newly created rendering parameters object.
Standard HRESULT error code.
An object that encapsulates a set of fonts, such as the set of fonts installed on the system, or the set of fonts in a particular directory. The font collection API can be used to discover what font families and fonts are available, and to obtain some metadata about the fonts.
-The
    IDWriteFontCollection* pFontCollection = NULL;

    // Get the system font collection.
    if (SUCCEEDED(hr))
    {
        hr = pDWriteFactory->GetSystemFontCollection(&pFontCollection);
    }
To determine what fonts are available on the system, get a reference to the system font collection. You can then use the GetFontFamilyCount and GetFontFamily methods to enumerate the available font families, as shown in the following example.
#include <dwrite.h>
#include <string.h>
#include <stdio.h>
#include <new>

// SafeRelease inline function.
template <class T> inline void SafeRelease(T **ppT)
{
    if (*ppT)
    {
        (*ppT)->Release();
        *ppT = NULL;
    }
}

void wmain()
{
    IDWriteFactory* pDWriteFactory = NULL;

    HRESULT hr = DWriteCreateFactory(
            DWRITE_FACTORY_TYPE_SHARED,
            __uuidof(IDWriteFactory),
            reinterpret_cast<IUnknown**>(&pDWriteFactory)
            );

    IDWriteFontCollection* pFontCollection = NULL;

    // Get the system font collection.
    if (SUCCEEDED(hr))
    {
        hr = pDWriteFactory->GetSystemFontCollection(&pFontCollection);
    }

    UINT32 familyCount = 0;

    // Get the number of font families in the collection.
    if (SUCCEEDED(hr))
    {
        familyCount = pFontCollection->GetFontFamilyCount();
    }

    for (UINT32 i = 0; i < familyCount; ++i)
    {
        IDWriteFontFamily* pFontFamily = NULL;

        // Get the font family.
        if (SUCCEEDED(hr))
        {
            hr = pFontCollection->GetFontFamily(i, &pFontFamily);
        }

        IDWriteLocalizedStrings* pFamilyNames = NULL;

        // Get a list of localized strings for the family name.
        if (SUCCEEDED(hr))
        {
            hr = pFontFamily->GetFamilyNames(&pFamilyNames);
        }

        UINT32 index = 0;
        BOOL exists = false;

        wchar_t localeName[LOCALE_NAME_MAX_LENGTH];

        if (SUCCEEDED(hr))
        {
            // Get the default locale for this user.
            int defaultLocaleSuccess = GetUserDefaultLocaleName(localeName, LOCALE_NAME_MAX_LENGTH);

            // If the default locale is returned, find that locale name, otherwise use "en-us".
            if (defaultLocaleSuccess)
            {
                hr = pFamilyNames->FindLocaleName(localeName, &index, &exists);
            }
            if (SUCCEEDED(hr) && !exists) // if the above find did not find a match, retry with US English
            {
                hr = pFamilyNames->FindLocaleName(L"en-us", &index, &exists);
            }
        }

        // If the specified locale doesn't exist, select the first on the list.
        if (!exists)
            index = 0;

        UINT32 length = 0;

        // Get the string length.
        if (SUCCEEDED(hr))
        {
            hr = pFamilyNames->GetStringLength(index, &length);
        }

        // Allocate a string big enough to hold the name.
        wchar_t* name = new (std::nothrow) wchar_t[length+1];
        if (name == NULL)
        {
            hr = E_OUTOFMEMORY;
        }

        // Get the family name.
        if (SUCCEEDED(hr))
        {
            hr = pFamilyNames->GetString(index, name, length+1);
        }
        if (SUCCEEDED(hr))
        {
            // Print out the family name.
            wprintf(L"%s\n", name);
        }

        SafeRelease(&pFontFamily);
        SafeRelease(&pFamilyNames);

        delete [] name;
    }

    SafeRelease(&pFontCollection);
    SafeRelease(&pDWriteFactory);
}
Gets the number of font families in the collection.
-Gets the number of font families in the collection.
-The number of font families in the collection.
Creates a font family object given a zero-based font family index.
-Zero-based index of the font family.
When this method returns, contains the address of a reference to the newly created font family object.
Finds the font family with the specified family name.
-An array of characters, which is null-terminated, containing the name of the font family. The name is not case-sensitive but must otherwise exactly match a family name in the collection.
When this method returns, contains the zero-based index of the matching font family if the family name was found; otherwise, UINT_MAX.
When this method returns, TRUE if the family name exists; otherwise, FALSE.
Gets the font object that corresponds to the same physical font as the specified font face object. The specified physical font must belong to the font collection.
-A font face object that specifies the physical font.
When this method returns, contains the address of a reference to the newly created font object if successful; otherwise, NULL.
Used to construct a collection of fonts given a particular type of key.
-The font collection loader interface is recommended to be implemented by a singleton object. Note that font collection loader implementations must not register themselves with the DirectWrite factory inside their constructors and must not unregister themselves in their destructors, because registration and unregistration operations increment and decrement the object reference count respectively. Instead, registration and unregistration of font collection loaders with the DirectWrite factory should be performed outside of the font collection loader implementation as a separate step.
-Represents an absolute reference to a font face which contains font face type, appropriate file references, face identification data and various font data such as metrics, names and glyph outlines.
-Obtains the file format type of a font face.
-Obtains the index of a font face in the context of its font files.
-Obtains the algorithmic style simulation flags of a font face.
-Determines whether the font is a symbol font.
-Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
-Obtains the number of glyphs in the font face.
-Obtains the file format type of a font face.
-A value that indicates the type of format for the font face (such as Type 1, TrueType, vector, or bitmap).
Obtains the font files representing a font face.
-If fontFiles is NULL, receives the number of files representing the font face. Otherwise, specifies the number of elements in the fontFiles array.
When this method returns, contains a reference to a user-provided array that stores references to font files representing the font face. This parameter can be NULL if the caller only wants the number of files representing the font face.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
-The GetFiles method should be called twice: the first time, pass NULL for fontFiles so that numberOfFiles receives the number of font files representing the font face.
Then, call the method a second time, passing the numberOfFiles value that was output the first call, and a non-null buffer of the correct size to store the font file references.
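This two-call pattern (query the count, then fill a caller-allocated buffer) can be sketched with a hypothetical stand-in for GetFiles; plain integers stand in for the COM font file references the real method returns:

```cpp
#include <cassert>

// Hypothetical stand-in for IDWriteFontFace::GetFiles. When 'files' is
// null it only reports the count; otherwise it fills the caller's buffer.
bool GetFilesMock(unsigned* numberOfFiles, int* files)
{
    static const int kFiles[] = { 101, 102, 103 };
    const unsigned count = 3;
    if (files == nullptr)
    {
        *numberOfFiles = count;     // first call: report the required size
        return true;
    }
    if (*numberOfFiles < count)
        return false;               // caller's buffer is too small
    for (unsigned i = 0; i < count; ++i)
        files[i] = kFiles[i];       // second call: fill the buffer
    return true;
}

// Demonstrates the two-call pattern and returns the first file id.
int DemoTwoCallPattern()
{
    unsigned numberOfFiles = 0;
    GetFilesMock(&numberOfFiles, nullptr);  // first call: get the count
    int files[8] = {0};
    GetFilesMock(&numberOfFiles, files);    // second call: fill the buffer
    return files[0];
}
```

The same query-then-fill idiom appears throughout COM APIs that return variable-length arrays.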
Obtains the index of a font face in the context of its font files.
-The zero-based index of a font face in cases when the font files contain a collection of font faces. If the font files contain a single face, this value is zero.
Obtains the algorithmic style simulation flags of a font face.
-Font face simulation flags for algorithmic means of making text bold or italic.
Determines whether the font is a symbol font.
-Returns TRUE if the font is a symbol font; otherwise, FALSE.
Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
-When this method returns, contains a structure that holds the metrics for the current font face. The metrics returned by this function are in font design units.
Obtains the number of glyphs in the font face.
-The number of glyphs in the font face.
Obtains ideal (resolution-independent) glyph metrics in font design units.
-An array of glyph indices for which to compute metrics. The array must contain at least as many elements as specified by glyphCount.
The number of elements in the glyphIndices array.
When this method returns, contains an array of glyph metrics structures filled by this function. The metrics returned are in font design units.
Indicates whether the font is being used in a sideways run. This can affect the glyph metrics if the font has oblique simulation, because sideways oblique simulation differs from non-sideways oblique simulation.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Design glyph metrics are used for glyph positioning.
-Returns the nominal mapping of UCS4 Unicode code points to glyph indices as defined by the font 'CMAP' table.
-An array of UCS4 code points from which to obtain nominal glyph indices. The array must be allocated and be able to contain the number of elements specified by codePointCount.
The number of elements in the codePoints array.
When this method returns, contains a reference to an array of nominal glyph indices filled by this function.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Note that this mapping is primarily provided for line layout engines built on top of the physical font API. Because of OpenType glyph substitution and line layout character substitution, the nominal conversion does not always correspond to how a Unicode string will map to glyph indices when rendering using a particular font face. Also, note that Unicode variant selectors provide for alternate mappings for character to glyph. This call will always return the default variant.
When characters are not present in the font this method returns the index 0, which is the undefined glyph or ".notdef" glyph. If a character isn't in a font,
Finds the specified OpenType font table if it exists and returns a reference to it. The function accesses the underlying font data through the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The context for the same tag may be different for each call, so each one must be held and released separately.
-Releases the table obtained earlier from TryGetFontTable.
-Computes the outline of a run of glyphs by calling back to the outline sink interface.
-The logical size of the font in DIP units. A DIP ("device-independent pixel") equals 1/96 inch.
An array of glyph indices. The glyphs are in logical order and the advance direction depends on the isRightToLeft parameter. The array must be allocated and be able to contain the number of elements specified by glyphCount.
An optional array of glyph advances in DIPs. The advance of a glyph is the amount to advance the position (in the direction of the baseline) after drawing the glyph. glyphAdvances contains the number of elements specified by glyphCount.
An optional array of glyph offsets, each of which specifies the offset along the baseline and offset perpendicular to the baseline of a glyph relative to the current pen position. glyphOffsets contains the number of elements specified by glyphCount.
The number of glyphs in the run.
If TRUE, the ascender of the glyph runs alongside the baseline. If
A client can render a vertical run by setting isSideways to TRUE and rotating the resulting geometry 90 degrees to the right using a transform. The isSideways and isRightToLeft parameters cannot both be true.
The visual order of the glyphs. If this parameter is FALSE, the glyph advances proceed from left to right; if TRUE, they proceed from right to left.
A reference to the interface that is called back to perform outline drawing operations.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Determines the recommended rendering mode for the font, using the specified size and rendering parameters.
-The logical size of the font in DIP units. A DIP ("device-independent pixel") equals 1/96 inch.
The number of physical pixels per DIP. For example, if the DPI of the rendering surface is 96, this value is 1.0f. If the DPI is 120, this value is 120.0f/96.
The measuring method that will be used for glyphs in the font. Renderer implementations may choose different rendering modes for different measuring methods.
A reference to an object that contains rendering settings such as gamma level, enhanced contrast, and ClearType level. This parameter is necessary in case the rendering parameters object overrides the rendering mode.
When this method returns, contains a value that indicates the recommended rendering mode to use.
Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
-The logical size of the font in DIP units.
The number of physical pixels per DIP.
An optional transform applied to the glyphs and their positions. This transform is applied after the scaling specified by the font size and pixelsPerDip.
A reference to a DWRITE_FONT_METRICS structure to fill in. The metrics returned by this function are in font design units.
Obtains glyph metrics in font design units with the return values compatible with what GDI would produce.
-The logical size of the font in DIP units.
The number of physical pixels per DIP.
An optional transform applied to the glyphs and their positions. This transform is applied after the scaling specified by the font size and pixelsPerDip.
When set to FALSE, the metrics are the same as the metrics of GDI aliased text. When set to TRUE, the metrics are the same as the metrics of text measured by GDI using a font created with CLEARTYPE_NATURAL_QUALITY.
An array of glyph indices for which to compute the metrics.
The number of elements in the glyphIndices array.
An array of glyph metrics structures filled by this function. The metrics returned are in font design units.
A BOOL value that indicates whether the font is being used in a sideways run.
Standard HRESULT error code.
Allows you to access fallback fonts from the font list.
The
Determines an appropriate font to use to render the beginning range of text.
-The text source implementation holds the text and locale.
Starting position to analyze.
Length of the text to analyze.
Default font collection to use.
Family name of the base font. If you pass null, no matching will be done against the family.
The desired weight.
The desired style.
The desired stretch.
Length of text mapped to the mapped font. This will always be less than or equal to the text length and greater than zero (if the text length is non-zero) so the caller advances at least one character.
The font that should be used to render the first mappedLength characters of the text. If it returns NULL, that means that no font can render the text.
Scale factor to multiply the em size of the returned font by.
Determines an appropriate font to use to render the beginning range of text.
-The text source implementation holds the text and locale.
Starting position to analyze.
Length of the text to analyze.
Default font collection to use.
Family name of the base font. If you pass null, no matching will be done against the family.
The desired weight.
The desired style.
The desired stretch.
Length of text mapped to the mapped font. This will always be less than or equal to the text length and greater than zero (if the text length is non-zero) so the caller advances at least one character.
The font that should be used to render the first mappedLength characters of the text. If it returns NULL, that means that no font can render the text.
Scale factor to multiply the em size of the returned font by.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Specifies properties used to identify and execute typographic features in the current font face.
-A non-zero value generally enables feature execution, while a zero value disables it. A feature that requires a selector uses this value to indicate the selector index.
The OpenType standard provides access to typographic features available in the font by means of a feature tag with the associated parameters. The OpenType feature tag is a 4-byte identifier of the registered name of a feature. For example, the 'kern' feature name tag is used to identify the 'Kerning' feature in an OpenType font. Similarly, the OpenType feature tags for 'Standard Ligatures' and 'Fractions' are 'liga' and 'frac' respectively. Because a single run can be associated with more than one typographic feature, the Text String API accepts typographic settings for a run as a list of features, which are executed in the order they are specified.
The value of the tag member represents the OpenType name tag of the feature, while the param value represents an additional parameter for the execution of the feature referred to by the tag member. Both nameTag and parameter are stored as little endian, the same convention followed by GDI. Most features treat the param value as a binary value that indicates whether to turn the execution of the feature on or off, with it being off by default in the majority of cases. Some features, however, treat this value as an integral value representing the integer index into the list of alternate results it may produce during the execution; for instance, the feature 'Stylistic Alternates' or 'salt' uses the parameter value as an index into the list of alternate substituting glyphs it could produce for a specified glyph.
-The feature OpenType name identifier.
The execution parameter of the feature.
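As a sketch of the little-endian tag layout described above, the following helper mirrors what DirectWrite's DWRITE_MAKE_OPENTYPE_TAG macro produces, with the first character of the feature name in the least-significant byte:

```cpp
#include <cassert>
#include <cstdint>

// Builds a 4-byte OpenType feature tag with the first character in the
// least-significant byte (little-endian), the convention DirectWrite and
// GDI use for nameTag.
constexpr std::uint32_t MakeOpenTypeTag(char a, char b, char c, char d)
{
    return (static_cast<std::uint32_t>(static_cast<unsigned char>(d)) << 24) |
           (static_cast<std::uint32_t>(static_cast<unsigned char>(c)) << 16) |
           (static_cast<std::uint32_t>(static_cast<unsigned char>(b)) << 8)  |
            static_cast<std::uint32_t>(static_cast<unsigned char>(a));
}
```

For example, the 'liga' (Standard Ligatures) tag packs to 0x6167696C.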
Represents a font file. Applications such as font managers or font viewers can call the Analyze method to determine whether a particular file is a font file, and whether it is a font type that is supported by the font system.
Obtains the reference to the reference key of a font file. The returned reference is valid until the font file object is released.
-When this method returns, contains an address of a reference to the font file reference key. Note that the reference value is only valid until the font file object it is obtained from is released. This parameter is passed uninitialized.
When this method returns, contains the size of the font file reference key in bytes. This parameter is passed uninitialized.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Obtains the file loader associated with a font file object.
-When this method returns, contains the address of a reference to the font file loader associated with the font file object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Analyzes a file and returns whether it represents a font, and whether the font type is supported by the font system.
-TRUE if the font type is supported by the font system; otherwise, FALSE.
When this method returns, contains a value that indicates the type of the font file. Note that even if isSupportedFontType is FALSE, the fontFileType value may be different from DWRITE_FONT_FILE_TYPE_UNKNOWN.
When this method returns, contains a value that indicates the type of the font face. If fontFileType is not equal to DWRITE_FONT_FILE_TYPE_UNKNOWN, this parameter is set to the face type of the font.
When this method returns, contains the number of font faces contained in the font file.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Advances to the next font file in the collection. When it is first created, the enumerator is positioned before the first element of the collection and the first call to MoveNext advances to the first file.
-Handles loading font file resources of a particular type from a font file reference key into a font file stream object.
-The font file loader interface is recommended to be implemented by a singleton object. Note that font file loader implementations must not register themselves with the DirectWrite factory inside their constructors and must not unregister themselves in their destructors, because registration and unregistration operations increment and decrement the object reference count respectively. Instead, registration and unregistration of font file loaders with the DirectWrite factory should be performed outside of the font file loader implementation as a separate step.
-Handles loading font file resources of a particular type from a font file reference key into a font file stream object.
-The font file loader interface is recommended to be implemented by a singleton object. Note that font file loader implementations must not register themselves with the DirectWrite factory inside their constructors and must not unregister themselves in their destructors, because registration and unregistration operations increment and decrement the object reference count respectively. Instead, registration and unregistration of font file loaders with the DirectWrite factory should be performed outside of the font file loader implementation as a separate step.
-Creates a font file stream object that encapsulates an open file resource.
-A reference to a font file reference key that uniquely identifies the font file resource within the scope of the font loader being used. The buffer allocated for this key must at least be the size, in bytes, specified by fontFileReferenceKeySize.
The size of font file reference key, in bytes.
When this method returns, contains the address of a reference to the newly created font file stream object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The resource is closed when the last reference to fontFileStream is released.
-Reads a fragment from a font file.
-Note that ReadFileFragment implementations must check whether the requested font file fragment is within the file bounds. Otherwise, an error should be returned from ReadFileFragment.
DirectWrite may invoke
Reads a fragment from a font file.
-Note that ReadFileFragment implementations must check whether the requested font file fragment is within the file bounds. Otherwise, an error should be returned from ReadFileFragment.
DirectWrite may invoke
Reads a fragment from a font file.
-When this method returns, contains an address of a reference to the start of the font file fragment. This parameter is passed uninitialized.
The offset of the fragment, in bytes, from the beginning of the font file.
The size of the file fragment, in bytes.
When this method returns, contains the address of a reference to a reference to the client-defined context to be passed to ReleaseFileFragment.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Note that ReadFileFragment implementations must check whether the requested font file fragment is within the file bounds. Otherwise, an error should be returned from ReadFileFragment.
DirectWrite may invoke
Releases a fragment from a file.
-A reference to the client-defined context of a font fragment returned from ReadFileFragment.
Obtains the total size of a file.
-When this method returns, contains the total size of the file.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Implementing GetFileSize() for asynchronously loaded font files may require downloading the complete file contents. Therefore, this method should be used only for operations that either require a complete font file to be loaded (for example, copying a font file) or that need to make decisions based on the value of the file size (for example, validation against a persisted file size).
-Obtains the last modified time of the file.
-When this method returns, contains the last modified time of the file in the format that represents the number of 100-nanosecond intervals since January 1, 1601 (UTC).
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The "last modified time" is used by DirectWrite font selection algorithms to determine whether one font resource is more up to date than another one.
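The timestamp format above is a Windows FILETIME value: a count of 100-nanosecond intervals since January 1, 1601 (UTC). An illustrative conversion to Unix time, using the well-known 11,644,473,600-second offset between the 1601 and 1970 epochs (the helper is not part of DirectWrite):

```cpp
#include <cassert>
#include <cstdint>

// 100-ns intervals between 1601-01-01 and 1970-01-01 (11644473600 seconds).
constexpr std::uint64_t kEpochDifference = 116444736000000000ULL;
constexpr std::uint64_t kTicksPerSecond  = 10000000ULL;  // 100 ns per tick

// Converts a FILETIME-style tick count to whole seconds since the
// Unix epoch (1970-01-01 UTC).
constexpr std::uint64_t FileTimeToUnixSeconds(std::uint64_t fileTime)
{
    return (fileTime - kEpochDifference) / kTicksPerSecond;
}
```

Such a conversion is handy when comparing a font file's last-modified time against timestamps from other sources.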
-Provides interoperability with GDI, such as methods to convert a font face to a LOGFONT structure, or to convert a GDI font description into a font face.
Creates a font object that matches the properties specified by the LOGFONT structure.
A structure containing a GDI-compatible font description.
When this method returns, contains an address of a reference to a newly created font object if successful; otherwise, NULL.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Initializes a LOGFONT structure based on the GDI-compatible properties of the specified font.
An
When this method returns, contains a structure that receives a GDI-compatible font description.
When this method returns, contains TRUE if the specified font object is part of the system font collection; otherwise, FALSE.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The conversion to a
Initializes a LOGFONT structure based on the GDI-compatible properties of the specified font face.
An
When this method returns, contains a reference to a structure that receives a GDI-compatible font description.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The conversion to a
Creates a font face object that corresponds to the currently selected HFONT of the specified device context.
A handle to a device context into which a font has been selected. It is assumed that the client has already performed font mapping and that the font selected into the device context is the actual font to be used for rendering glyphs.
Contains an address of a reference to the newly created font face object, or NULL in case of failure.
This function is intended for scenarios in which an application wants to use GDI and Uniscribe 1.x for text layout and shaping, but DirectWrite for final rendering. This function assumes the client is performing text output using glyph indexes.
-Creates an object that encapsulates a bitmap and memory DC (device context) which can be used for rendering glyphs.
-A handle to the optional device context used to create a compatible memory DC (device context).
The width of the bitmap render target.
The height of the bitmap render target.
When this method returns, contains an address of a reference to the newly created bitmap render target object.
Contains the information needed by renderers to draw glyph runs. All coordinates are in device independent pixels (DIPs).
-The physical font face object to draw with.
The logical size of the font in DIPs (equals 1/96 inch), not points.
The number of glyphs in the glyph run.
A reference to an array of indices to render for the glyph run.
A reference to an array containing glyph advance widths for the glyph run.
A reference to an array containing glyph offsets for the glyph run.
If true, specifies that glyphs are rotated 90 degrees to the left and vertical metrics are used. Vertical writing is achieved by specifying isSideways = true and rotating the entire run 90 degrees to the right via a rotate transform.
The implicit resolved bidi level of the run. Odd levels indicate right-to-left languages like Hebrew and Arabic, while even levels indicate left-to-right languages like English and Japanese (when written horizontally). For right-to-left languages, the text origin is on the right, and text should be drawn to the left.
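The odd/even rule for the resolved bidi level can be expressed directly; this tiny helper is illustrative, not part of the API:

```cpp
#include <cassert>
#include <cstdint>

// Odd resolved bidi levels indicate right-to-left runs (e.g. Hebrew,
// Arabic); even levels indicate left-to-right runs.
constexpr bool IsRightToLeft(std::uint32_t bidiLevel)
{
    return (bidiLevel & 1u) != 0;
}
```

A renderer can use this to decide whether the text origin sits on the right and drawing proceeds to the left.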
Contains low-level information used to render a glyph run.
-The alpha texture can be a bi-level alpha texture or a ClearType alpha texture.
A bi-level alpha texture contains one byte per pixel, therefore the size of the buffer for a bi-level texture will be the area of the texture bounds, in bytes. Each byte in a bi-level alpha texture created by CreateAlphaTexture is either set to DWRITE_ALPHA_MAX (that is, 255) or zero.
A ClearType alpha texture contains three bytes per pixel, therefore the size of the buffer for a ClearType alpha texture is three times the area of the texture bounds, in bytes.
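Given the per-pixel sizes above (one byte per pixel for bi-level, three for ClearType), the minimum alpha-value buffer size for a texture covering a given bounding rectangle can be sketched as follows; the helper is illustrative, not a DirectWrite API:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative helper: minimum buffer size, in bytes, for an alpha
// texture covering a width x height bounding rectangle.
constexpr std::size_t AlphaTextureBufferSize(std::size_t width,
                                             std::size_t height,
                                             bool clearType)
{
    // Bi-level textures store 1 byte per pixel; ClearType stores 3.
    return width * height * (clearType ? 3u : 1u);
}
```

This is the value a caller would use when sizing the alphaValues array passed to CreateAlphaTexture.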
-Gets the bounding rectangle of the physical pixels affected by the glyph run.
-Specifies the type of texture requested. If a bi-level texture is requested, the bounding rectangle includes only bi-level glyphs. Otherwise, the bounding rectangle includes only antialiased glyphs.
When this method returns, contains the bounding rectangle of the physical pixels affected by the glyph run, or an empty rectangle if there are no glyphs of the specified texture type.
Creates an alpha texture of the specified type for glyphs within a specified bounding rectangle.
-A value that specifies the type of texture requested. This can be DWRITE_TEXTURE_BILEVEL_1x1 or DWRITE_TEXTURE_CLEARTYPE_3x1.
The bounding rectangle of the texture, which can be different than the bounding rectangle returned by GetAlphaTextureBounds.
When this method returns, contains the array of alpha values from the texture. The buffer allocated for this array must be at least the size of bufferSize.
The size of the alphaValues array, in bytes. The minimum size depends on the dimensions of the rectangle and the type of texture requested.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets alpha blending properties required for ClearType blending.
-An object that specifies the ClearType level and enhanced contrast, gamma, pixel geometry, and rendering mode. In most cases, the values returned by the output parameters of this method are based on the properties of this object, unless a GDI-compatible rendering mode was specified.
When this method returns, contains the gamma value to use for gamma correction.
When this method returns, contains the enhanced contrast value to be used for blending.
When this method returns, contains the ClearType level used in the alpha blending.
If this method succeeds, it returns
Contains additional properties related to those in
Wraps an application-defined inline graphic, allowing DWrite to query metrics as if the graphic were a glyph inline with the text.
-Wraps an application-defined inline graphic, allowing DWrite to query metrics as if the graphic were a glyph inline with the text.
- The application implemented rendering callback (
If this method succeeds, it returns
If this method succeeds, it returns
The overhangs should be returned relative to the reported size of the object (see
If this method succeeds, it returns
Layout uses this to determine the line-breaking behavior of the inline object among the text.
-When this method returns, contains a value which indicates the line-breaking condition between the object and the content immediately preceding it.
When this method returns, contains a value which indicates the line-breaking condition between the object and the content immediately following it.
If this method succeeds, it returns
Line breakpoint characteristics of a character.
-Indicates a breaking condition before the character.
Indicates a breaking condition after the character.
Indicates that the character is some form of whitespace, which may be meaningful for justification.
Indicates that the character is a soft hyphen, often used to indicate hyphenation points inside words.
Reserved for future use.
A built-in implementation of the
Obtains the absolute font file path from the font file reference key.
-The font file reference key that uniquely identifies the local font file within the scope of the font loader being used.
If this method succeeds, the absolute font file path from the font file reference key.
Obtains the last write time of the file from the font file reference key.
-The font file reference key that uniquely identifies the local font file within the scope of the font loader being used.
The time of the last font file modification.
Obtains the length of the absolute file path from the font file reference key.
-Font file reference key that uniquely identifies the local font file within the scope of the font loader being used.
Size of font file reference key in bytes.
Length of the file path string, not including the terminated
Obtains the absolute font file path from the font file reference key.
-The font file reference key that uniquely identifies the local font file within the scope of the font loader being used.
The size of font file reference key in bytes.
The character array that receives the local file path.
The length of the file path character array.
If this method succeeds, it returns
Obtains the last write time of the file from the font file reference key.
-The font file reference key that uniquely identifies the local font file within the scope of the font loader being used.
The size of font file reference key in bytes.
The time of the last font file modification.
Represents a collection of strings indexed by locale name.
-The set of strings represented by an
A common use for the
IDWriteLocalizedStrings* pFamilyNames = NULL;

// Get a list of localized strings for the family name.
if (SUCCEEDED(hr))
{
    hr = pFontFamily->GetFamilyNames(&pFamilyNames);
}

UINT32 index = 0;
BOOL exists = false;
wchar_t localeName[LOCALE_NAME_MAX_LENGTH];

if (SUCCEEDED(hr))
{
    // Get the default locale for this user.
    int defaultLocaleSuccess = GetUserDefaultLocaleName(localeName, LOCALE_NAME_MAX_LENGTH);

    // If the default locale is returned, find that locale name, otherwise use "en-us".
    if (defaultLocaleSuccess)
    {
        hr = pFamilyNames->FindLocaleName(localeName, &index, &exists);
    }
    if (SUCCEEDED(hr) && !exists) // If the above find did not find a match, retry with US English.
    {
        hr = pFamilyNames->FindLocaleName(L"en-us", &index, &exists);
    }
}

// If the specified locale doesn't exist, select the first on the list.
if (!exists)
    index = 0;

UINT32 length = 0;

// Get the string length.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetStringLength(index, &length);
}

// Allocate a string big enough to hold the name.
wchar_t* name = new (std::nothrow) wchar_t[length+1];
if (name == NULL)
{
    hr = E_OUTOFMEMORY;
}

// Get the family name.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetString(index, name, length+1);
}
Gets the number of language/string pairs.
-Gets the number of language/string pairs.
-The number of language/string pairs.
Gets the zero-based index of the locale name/string pair with the specified locale name.
-A null-terminated array of characters containing the locale name to look for.
The zero-based index of the locale name/string pair. This method initializes index to UINT_MAX.
When this method returns, contains TRUE if the locale name exists; otherwise,
Note that if the locale name does not exist, the return value is a success and the exists parameter is
UINT32 index = 0;
BOOL exists = false;
wchar_t localeName[LOCALE_NAME_MAX_LENGTH];

if (SUCCEEDED(hr))
{
    // Get the default locale for this user.
    int defaultLocaleSuccess = GetUserDefaultLocaleName(localeName, LOCALE_NAME_MAX_LENGTH);

    // If the default locale is returned, find that locale name, otherwise use "en-us".
    if (defaultLocaleSuccess)
    {
        hr = pFamilyNames->FindLocaleName(localeName, &index, &exists);
    }
    if (SUCCEEDED(hr) && !exists) // If the above find did not find a match, retry with US English.
    {
        hr = pFamilyNames->FindLocaleName(L"en-us", &index, &exists);
    }
}

// If the specified locale doesn't exist, select the first on the list.
if (!exists)
    index = 0;
Gets the length in characters (not including the null terminator) of the locale name with the specified index.
-Zero-based index of the locale name to be retrieved.
When this method returns, contains the length in characters of the locale name, not including the null terminator.
If this method succeeds, it returns
Copies the locale name with the specified index to the specified array.
-Zero-based index of the locale name to be retrieved.
When this method returns, contains a character array, which is null-terminated, that receives the locale name from the language/string pair. The buffer allocated for this array must be at least size elements.
The size of the array in characters. The size must include space for the terminating null character.
If this method succeeds, it returns
Gets the length in characters (not including the null terminator) of the string with the specified index.
-A zero-based index of the language/string pair.
The length in characters of the string, not including the null terminator, from the language/string pair.
If this method succeeds, it returns
Use GetStringLength to get the string length before calling the
UINT32 length = 0;

// Get the string length.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetStringLength(index, &length);
}

// Allocate a string big enough to hold the name.
wchar_t* name = new (std::nothrow) wchar_t[length+1];
if (name == NULL)
{
    hr = E_OUTOFMEMORY;
}

// Get the family name.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetString(index, name, length+1);
}
Copies the string with the specified index to the specified array.
-The zero-based index of the language/string pair to be examined.
The null-terminated array of characters that receives the string from the language/string pair. The buffer allocated for this array should be at least size elements. GetStringLength can be used to get the size of the array before using this method.
The size of the array in characters. The size must include space for the terminating null character. GetStringLength can be used to get the size of the array before using this method.
If this method succeeds, it returns
The string returned must be allocated by the caller. You can get the size of the string by using the GetStringLength method prior to calling GetString, as shown in the following example.
UINT32 length = 0;

// Get the string length.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetStringLength(index, &length);
}

// Allocate a string big enough to hold the name.
wchar_t* name = new (std::nothrow) wchar_t[length+1];
if (name == NULL)
{
    hr = E_OUTOFMEMORY;
}

// Get the family name.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetString(index, name, length+1);
}
Holds the appropriate digits and numeric punctuation for a specified locale.
-Defines the pixel snapping properties such as pixels per DIP(device-independent pixel) and the current transform matrix of a text renderer.
-Represents text rendering settings such as ClearType level, enhanced contrast, and gamma correction for glyph rasterization and filtering.
An application typically obtains a rendering parameters object by calling the
Gets the gamma value used for gamma correction. Valid values must be greater than zero and cannot exceed 256.
-The gamma value is used for gamma correction, which compensates for the non-linear luminosity response of most monitors.
-Gets the enhanced contrast property of the rendering parameters object. Valid values are greater than or equal to zero.
-Enhanced contrast is the amount to increase the darkness of text, and typically ranges from 0 to 1. Zero means no contrast enhancement.
-Gets the ClearType level of the rendering parameters object.
-The ClearType level represents the amount of ClearType; that is, the degree to which the red, green, and blue subpixels of each pixel are treated differently. Valid values range from zero (meaning no ClearType, which is equivalent to grayscale anti-aliasing) to one (meaning full ClearType).
-Gets the pixel geometry of the rendering parameters object.
-Gets the rendering mode of the rendering parameters object.
-By default, the rendering mode is initialized to
Gets the gamma value used for gamma correction. Valid values must be greater than zero and cannot exceed 256.
-Returns the gamma value used for gamma correction. Valid values must be greater than zero and cannot exceed 256.
The gamma value is used for gamma correction, which compensates for the non-linear luminosity response of most monitors.
-Gets the enhanced contrast property of the rendering parameters object. Valid values are greater than or equal to zero.
-Returns the amount of contrast enhancement. Valid values are greater than or equal to zero.
Enhanced contrast is the amount to increase the darkness of text, and typically ranges from 0 to 1. Zero means no contrast enhancement.
-Gets the ClearType level of the rendering parameters object.
-The ClearType level of the rendering parameters object.
The ClearType level represents the amount of ClearType; that is, the degree to which the red, green, and blue subpixels of each pixel are treated differently. Valid values range from zero (meaning no ClearType, which is equivalent to grayscale anti-aliasing) to one (meaning full ClearType).
-Gets the pixel geometry of the rendering parameters object.
-A value that indicates the type of pixel geometry used in the rendering parameters object.
Gets the rendering mode of the rendering parameters object.
-A value that indicates the rendering mode of the rendering parameters object.
By default, the rendering mode is initialized to
Contains shaping output properties for an output glyph.
-Indicates that the glyph has justification applied.
Indicates that the glyph is the start of a cluster.
Indicates that the glyph is a diacritic mark.
Indicates that the glyph is a word boundary with no visible space.
Reserved for future use.
This interface is implemented by the text analyzer's client to receive the output of a given text analysis.
-The text analyzer disregards any current state of the analysis sink, therefore, a Set method call on a range overwrites the previously set analysis result of the same range.
-The interface you implement to receive the output of the text analyzers.
-The text analyzer calls back to this to report the actual orientation of each character for shaping and drawing.
-The starting position to report from.
Number of UTF-16 units of the reported range.
A
The adjusted bidi level to be used by the client layout for reordering runs. This will differ from the resolved bidi level retrieved from the source for cases such as Arabic stacked top-to-bottom, where the glyphs are still shaped as RTL, but the runs are TTB along with any CJK or Latin.
Whether the glyphs are rotated on their side, which is the default case for CJK and the case for stacked Latin.
Whether the script should be shaped as right-to-left. For Arabic stacked top-to-bottom, even when the adjusted bidi level is coerced to an even level, this will still be true.
Returns a successful code or an error code to abort analysis.
Implemented by the text analyzer's client to provide text to the analyzer. It allows the separation between the logical view of text as a continuous stream of characters identifiable by unique text positions, and the actual memory layout of potentially discrete blocks of text in the client's backing store.
-If any of these callbacks returns an error, then the analysis functions will stop prematurely and return a callback error. Note that rather than return E_NOTIMPL, an application should stub the method and return a constant/null and
The interface you implement to provide needed information to the text analyzer, like the text and associated text properties.
Note: If any of these callbacks returns an error, the analysis functions will stop prematurely and return a callback error.
Used by the text analyzer to obtain the desired glyph orientation and resolved bidi level.
-The text position.
A reference to the text length.
A
A reference to the resolved bidi level.
Returning an error will abort the analysis.
The text analyzer calls back to this to get the desired glyph orientation and resolved bidi level, which it uses along with the script properties of the text to determine the actual orientation of each character, which it reports back to the client via the sink SetGlyphOrientation method.
-Analyzes various text properties for complex script processing such as bidirectional (bidi) support for languages like Arabic, determination of line break opportunities, glyph placement, and number substitution.
-Analyzes a text range for script boundaries, reading text attributes from the source and reporting the Unicode script ID to the sink callback SetScript.
-If this method succeeds, it returns
Analyzes a text range for script directionality, reading attributes from the source and reporting levels to the sink callback SetBidiLevel.
-If this method succeeds, it returns
While the function can handle multiple paragraphs, the text range should not arbitrarily split the middle of paragraphs. Otherwise, the returned levels may be wrong, because the Bidi algorithm is meant to apply to the paragraph as a whole.
-Analyzes a text range for spans where number substitution is applicable, reading attributes from the source and reporting substitutable ranges to the sink callback SetNumberSubstitution.
-If this method succeeds, it returns
Although the function can handle multiple ranges of differing number substitutions, the text ranges should not arbitrarily split the middle of numbers. Otherwise, it will treat the numbers separately and will not translate any intervening punctuation.
-Analyzes a text range for potential breakpoint opportunities, reading attributes from the source and reporting breakpoint opportunities to the sink callback SetLineBreakpoints.
-If this method succeeds, it returns
Although the function can handle multiple paragraphs, the text range should not arbitrarily split the middle of paragraphs, unless the specified text span is considered a whole unit. Otherwise, the returned properties for the first and last characters will inappropriately allow breaks.
-Parses the input text string and maps it to the set of glyphs and associated glyph data according to the font and the writing system's rendering rules.
-An array of characters to convert to glyphs.
The length of textString.
The font face that is the source of the output glyphs.
A Boolean flag set to TRUE if the text is intended to be drawn vertically.
A Boolean flag set to TRUE for right-to-left text.
A reference to a Script analysis result from an AnalyzeScript call.
The locale to use when selecting glyphs. For example the same character may map to different glyphs for ja-jp versus zh-chs. If this is
A reference to an optional number substitution which selects the appropriate glyphs for digits and related numeric characters, depending on the results obtained from AnalyzeNumberSubstitution. Passing
An array of references to the sets of typographic features to use in each feature range.
The length of each feature range, in characters. The sum of all lengths should be equal to textLength.
The number of feature ranges.
The maximum number of glyphs that can be returned.
When this method returns, contains the mapping from character ranges to glyph ranges.
When this method returns, contains a reference to an array of structures that contains shaping properties for each character.
The output glyph indices.
When this method returns, contains a reference to an array of structures that contain shaping properties for each output glyph.
When this method returns, contains the actual number of glyphs returned if the call succeeds.
If this method succeeds, it returns
Note that the mapping from characters to glyphs is, in general, many-to-many. The recommended estimate for the per-glyph output buffers is (3 * textLength / 2 + 16). This is not guaranteed to be sufficient. The value of the actualGlyphCount parameter is only valid if the call succeeds. In the event that maxGlyphCount is not big enough, HRESULT_FROM_WIN32(
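The recommended estimate can be wrapped in a small helper (the function name is illustrative, not a DirectWrite API):

```cpp
#include <cstdint>

// Recommended starting size for the per-glyph output buffers passed to
// GetGlyphs, per the guidance above: 3 * textLength / 2 + 16. This is not
// guaranteed to be sufficient; if GetGlyphs reports an insufficient buffer,
// reallocate a larger buffer and retry. (Helper name is illustrative.)
std::uint32_t EstimateGlyphCount(std::uint32_t textLength)
{
    return 3 * textLength / 2 + 16;
}
```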
Places glyphs output from the GetGlyphs method according to the font and the writing system's rendering rules.
-If this method succeeds, it returns
Place glyphs output from the GetGlyphs method according to the font and the writing system's rendering rules.
-If this method succeeds, it returns
Analyzes various text properties for complex script processing.
-Analyzes a text range for script orientation, reading text and attributes from the source and reporting results to the sink.
-Source object to analyze.
Starting position within the source object.
Length to analyze.
Sink object that receives the text analysis.
If this method succeeds, it returns
Applies spacing between characters, properly adjusting glyph clusters and diacritics.
-The spacing before each character, in reading order.
The spacing after each character, in reading order.
The minimum advance of each character, to prevent characters from becoming too thin or zero-width. This must be zero or greater.
The length of the clustermap and original text.
The number of glyphs.
Mapping from character ranges to glyph ranges.
The advance width of each glyph.
The offset of the origin of each glyph.
Properties of each glyph, from GetGlyphs.
The new advance width of each glyph.
The new offset of the origin of each glyph.
If this method succeeds, it returns
The input and output advances/offsets are allowed to alias the same array.
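The per-glyph arithmetic can be approximated as follows; this is a simplified sketch that ignores the clustermap and the diacritic handling the real ApplyCharacterSpacing performs (the helper name is illustrative):

```cpp
#include <algorithm>
#include <vector>

// Simplified model of per-glyph spacing: add the leading and trailing
// spacing to each advance, then clamp to the minimum advance so no glyph
// becomes too thin or zero-width. The real ApplyCharacterSpacing also maps
// character spacing through the clustermap and adjusts glyph offsets for
// diacritics; this sketch ignores both.
std::vector<float> ApplySpacingSketch(const std::vector<float>& advances,
                                      float leadingSpacing,
                                      float trailingSpacing,
                                      float minimumAdvance)
{
    std::vector<float> result(advances.size());
    for (std::size_t i = 0; i < advances.size(); ++i)
    {
        result[i] = std::max(minimumAdvance,
                             advances[i] + leadingSpacing + trailingSpacing);
    }
    return result;
}
```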
-Retrieves the given baseline from the font.
-The font face to read.
A
Whether the baseline is vertical or horizontal.
Simulate the baseline if it is missing in the font.
Script analysis result from AnalyzeScript.
Note: You can pass an empty script analysis structure, like scriptAnalysis = {};, and this method will return the default baseline.
The language of the run.
The baseline coordinate value in design units.
Whether the returned baseline exists in the font.
If this method succeeds, it returns
If the baseline does not exist in the font, it is not considered an error, but the function will return exists = false. You may then use heuristics to calculate the missing base, or, if the flag simulationAllowed is true, the function will compute a reasonable approximation for you.
-Analyzes a text range for script orientation, reading text and attributes from the source and reporting results to the sink callback SetGlyphOrientation.
-If this method succeeds, it returns
Returns 2x3 transform matrix for the respective angle to draw the glyph run.
-A
Whether the run's glyphs are sideways or not.
Returned transform.
If this method succeeds, it returns
The translation component of the transform returned is zero.
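For reference, a sketch of what pure quarter-turn rotation matrices with zero translation look like; the helper and its return layout (m11, m12, m21, m22, dx, dy) are illustrative, not the actual GetGlyphOrientationTransform:

```cpp
#include <array>

// 2x3 transform (m11, m12, m21, m22, dx, dy) for a glyph-orientation angle
// of 0/90/180/270 degrees. As noted above, the translation component (dx, dy)
// is always zero. (Illustrative sketch, not the DirectWrite method.)
std::array<float, 6> OrientationTransform(int quarterTurns)
{
    switch (quarterTurns & 3)
    {
    case 1:  return { 0, 1, -1, 0, 0, 0 };   // 90 degrees
    case 2:  return { -1, 0, 0, -1, 0, 0 };  // 180 degrees
    case 3:  return { 0, -1, 1, 0, 0, 0 };   // 270 degrees
    default: return { 1, 0, 0, 1, 0, 0 };    // 0 degrees (identity)
    }
}
```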
-Retrieves the properties for a given script.
-The script for a run of text returned from
A reference to a
Returns properties for the given script. If the script is invalid, it returns generic properties for the unknown script and E_INVALIDARG.
Determines the complexity of text, and whether you need to call
If this method succeeds, it returns
Text is not simple if the characters are part of a script that has complex shaping requirements, require bidi analysis, combine with other characters, reside in the supplementary planes, or have glyphs that participate in standard OpenType features. The length returned will not split combining marks from their base characters.
-Retrieves justification opportunity information for each of the glyphs given the text and shaping glyph properties.
-Font face that was used for shaping. This is mainly important for returning correct results of the kashida width.
May be
Font em size used for the glyph run.
Script of the text from the itemizer.
Length of the text.
Number of glyphs.
Characters used to produce the glyphs.
Clustermap produced from shaping.
Glyph properties produced from shaping.
A reference to a
If this method succeeds, it returns
This function is called per-run, after shaping is done via the
Justifies an array of glyph advances to fit the line width.
-The line width.
The glyph count.
A reference to a
An array of glyph advances.
An array of glyph offsets.
The returned array of justified glyph advances.
The returned array of justified glyph offsets.
If this method succeeds, it returns
You call JustifyGlyphAdvances after you call
Fills in new glyphs for complex scripts where justification increased the advances of glyphs, such as Arabic with kashida.
-Font face used for shaping.
May be
Font em size used for the glyph run.
Script of the text from the itemizer.
Length of the text.
Number of glyphs.
Maximum number of output glyphs allocated by caller.
Clustermap produced from shaping.
Original glyphs produced from shaping.
Original glyph advances produced from shaping.
Justified glyph advances from
Justified glyph offsets from
Properties of each glyph, from
The new glyph count written to the modified arrays, or the needed glyph count if the size is not large enough.
Updated clustermap.
Updated glyphs with new glyphs inserted where needed.
Updated glyph advances.
Updated glyph offsets.
If this method succeeds, it returns
You call GetJustifiedGlyphs after the line has been justified, and it is per-run.
You should call GetJustifiedGlyphs if
Use GetJustifiedGlyphs mainly for cursive scripts like Arabic. If maxGlyphCount is not large enough, GetJustifiedGlyphs returns the error E_NOT_SUFFICIENT_BUFFER and fills the variable to which actualGlyphCount points with the needed glyph count.
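The resulting call pattern is a grow-and-retry loop; the sketch below uses a stand-in function in place of GetJustifiedGlyphs to show the shape of that loop:

```cpp
#include <cstdint>

// Stand-in for a GetJustifiedGlyphs-style call: it fails when maxGlyphCount
// is too small, and in every case reports the needed count via actualCount.
bool FillGlyphs(std::uint32_t needed, std::uint32_t maxGlyphCount,
                std::uint32_t* actualCount)
{
    *actualCount = needed;
    return maxGlyphCount >= needed;
}

// Grow-and-retry: start from an estimate; if the call reports an
// insufficient buffer, resize to the reported count and call again.
std::uint32_t CallWithRetry(std::uint32_t needed, std::uint32_t estimate)
{
    std::uint32_t capacity = estimate;
    std::uint32_t actual = 0;
    while (!FillGlyphs(needed, capacity, &actual))
    {
        capacity = actual;  // Grow to the reported needed glyph count.
    }
    return actual;
}
```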
- The
To get a reference to the
if (SUCCEEDED(hr))
{
    hr = pDWriteFactory_->CreateTextFormat(
        L"Gabriola",                 // Font family name.
        NULL,                        // Font collection (NULL selects the system font collection).
        DWRITE_FONT_WEIGHT_REGULAR,
        DWRITE_FONT_STYLE_NORMAL,
        DWRITE_FONT_STRETCH_NORMAL,
        72.0f,
        L"en-us",
        &pTextFormat_
        );
}
When creating an
These properties cannot be changed after the
The
To draw text with multiple formats, or to use a custom text renderer, use the
This object may not be thread-safe, and it may carry the state of text format change.
-Sets trimming options for text overflowing the layout width.
-Text trimming options.
Application-defined omission sign. This parameter may be
If this method succeeds, it returns
Gets or sets the alignment option of text relative to the layout box's leading and trailing edge.
-Gets or sets the alignment option of a paragraph which is relative to the top and bottom edges of a layout box.
-Gets or sets the word wrapping option.
-Gets or sets the current reading direction for text in a paragraph.
-Gets or sets the direction that text lines flow.
-Gets or sets the incremental tab stop position.
-Gets the current font collection.
-Gets the font weight of the text.
-Gets the font style of the text.
-Gets the font stretch of the text.
-Gets the font size in DIP units.
-Sets the alignment of text in a paragraph, relative to the leading and trailing edge of a layout box for a
This method can return one of these values.

Return code | Description
---|---
S_OK | The method succeeded.
E_INVALIDARG | The textAlignment argument is invalid.
The text can be aligned to the leading or trailing edge of the layout box, or it can be centered. The following illustration shows text with the alignment set to
See
Sets the alignment option of a paragraph relative to the layout box's top and bottom edge.
-The paragraph alignment option being set for a paragraph; see
If this method succeeds, it returns
Sets the word wrapping option.
-The word wrapping option being set for a paragraph; see
If this method succeeds, it returns
Sets the paragraph reading direction.
- The text reading direction (for example,
If this method succeeds, it returns
The reading direction and flow direction must always be set 90 degrees orthogonal to each other, or else you will get the error DWRITE_E_FLOWDIRECTIONCONFLICTS when you use layout functions like Draw or GetMetrics. So if you set a vertical reading direction (for example, to
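The orthogonality rule can be expressed as a small predicate; the enum below is local and illustrative (its values are not the actual DWRITE_READING_DIRECTION / DWRITE_FLOW_DIRECTION values), defined here only so the sketch is self-contained:

```cpp
// Reading and flow direction must be orthogonal (one horizontal, one
// vertical), or layout calls such as Draw and GetMetrics fail with
// DWRITE_E_FLOWDIRECTIONCONFLICTS. Local illustrative enum.
enum Direction
{
    LeftToRight,
    RightToLeft,
    TopToBottom,
    BottomToTop
};

bool DirectionsAreOrthogonal(Direction reading, Direction flow)
{
    const bool readingVertical = (reading == TopToBottom || reading == BottomToTop);
    const bool flowVertical    = (flow == TopToBottom || flow == BottomToTop);
    return readingVertical != flowVertical;  // Exactly one axis is vertical.
}
```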
Sets the paragraph flow direction.
-The paragraph flow direction; see
If this method succeeds, it returns
Sets a fixed distance between two adjacent tab stops.
-The fixed distance between two adjacent tab stops.
If this method succeeds, it returns
Sets trimming options for text overflowing the layout width.
-Text trimming options.
Application-defined omission sign. This parameter may be
If this method succeeds, it returns
Sets the line spacing.
-Specifies how line height is being determined; see
The line height, or distance from one baseline to another.
The distance from top of line to baseline. A reasonable ratio to lineSpacing is 80 percent.
If this method succeeds, it returns
For the default method, spacing depends solely on the content. For uniform spacing, the specified line height overrides the content.
-Gets the alignment option of text relative to the layout box's leading and trailing edge.
-Returns the text alignment option of the current paragraph.
Gets the alignment option of a paragraph which is relative to the top and bottom edges of a layout box.
-A value that indicates the current paragraph alignment option.
Gets the word wrapping option.
-Returns the word wrapping option; see
Gets the current reading direction for text in a paragraph.
-A value that indicates the current reading direction for text in a paragraph.
Gets the direction that text lines flow.
-The direction that text lines flow within their parent container. For example,
Gets the incremental tab stop position.
-The incremental tab stop value.
Gets the trimming options for text that overflows the layout box.
-When this method returns, it contains a reference to a
When this method returns, contains an address of a reference to a trimming omission sign. This parameter may be
If this method succeeds, it returns
Gets the line spacing adjustment set for a multiline text paragraph.
-A value that indicates how line height is determined.
When this method returns, contains the line height, or distance from one baseline to another.
When this method returns, contains the distance from top of line to baseline. A reasonable ratio to lineSpacing is 80 percent.
If this method succeeds, it returns
Gets the current font collection.
-When this method returns, contains an address of a reference to the font collection being used for the current text.
If this method succeeds, it returns
Gets the length of the font family name.
-The size of the character array, in character count, not including the terminated
Gets a copy of the font family name.
-When this method returns, contains a reference to a character array, which is null-terminated, that receives the current font family name. The buffer allocated for this array should be at least the size, in elements, of nameSize.
The size of the fontFamilyName character array, in character count, including the terminated
If this method succeeds, it returns
Gets the font weight of the text.
-A value that indicates the type of weight (such as normal, bold, or black).
Gets the font style of the text.
-A value which indicates the type of font style (such as slope or incline).
Gets the font stretch of the text.
-A value which indicates the type of font stretch (such as normal or condensed).
Gets the font size in DIP units.
-The current font size in DIP units.
Gets the length of the locale name.
-The size of the character array in character count, not including the terminated
Gets a copy of the locale name.
-Contains a character array that receives the current locale name.
The size of the character array, in character count, including the terminated
If this method succeeds, it returns
The
To get a reference to the
// Create a text layout using the text format.
if (SUCCEEDED(hr))
{
    RECT rect;
    GetClientRect(hwnd_, &rect);
    float width  = rect.right  / dpiScaleX_;
    float height = rect.bottom / dpiScaleY_;

    hr = pDWriteFactory_->CreateTextLayout(
        wszText_,      // The string to be laid out and formatted.
        cTextLength_,  // The length of the string.
        pTextFormat_,  // The text format to apply to the string (contains font information, etc).
        width,         // The width of the layout box.
        height,        // The height of the layout box.
        &pTextLayout_  // The interface reference.
        );
}
The
// Set the font weight to bold for the first 5 letters.
DWRITE_TEXT_RANGE textRange = {0, 4};

if (SUCCEEDED(hr))
{
    hr = pTextLayout_->SetFontWeight(DWRITE_FONT_WEIGHT_BOLD, textRange);
}
To draw the block of text represented by an
Gets or sets the layout maximum width.
-Gets or sets the layout maximum height.
-Retrieves overall metrics for the formatted string.
-Returns the overhangs (in DIPs) of the layout and all objects contained in it, including text glyphs and inline objects.
-Underlines and strikethroughs do not contribute to the black box determination, since these are actually drawn by the renderer, which is allowed to draw them in any variety of styles.
-Sets the layout maximum width.
-A value that indicates the maximum width of the layout box.
If this method succeeds, it returns
Sets the layout maximum height.
-A value that indicates the maximum height of the layout box.
If this method succeeds, it returns
Sets the font collection.
-The font collection to set.
Text range to which this change applies.
If this method succeeds, it returns
Sets the null-terminated font family name for text within a specified text range.
-The font family name that applies to the entire text string within the range specified by textRange.
Text range to which this change applies.
If this method succeeds, it returns
Sets the font weight for text within a text range specified by a
If this method succeeds, it returns
The font weight can be set to one of the predefined font weight values provided in the
The following illustration shows an example of Normal and UltraBold weights for the Palatino Linotype typeface.
- Sets the font style for text within a text range specified by a
If this method succeeds, it returns
The font style can be set to Normal, Italic or Oblique. The following illustration shows three styles for the Palatino font. For more information, see
Sets the font stretch for text within a specified text range.
-A value which indicates the type of font stretch for text within the range specified by textRange.
Text range to which this change applies.
If this method succeeds, it returns
Sets the font size in DIP units for text within a specified text range.
-The font size in DIP units to be set for text in the range specified by textRange.
Text range to which this change applies.
If this method succeeds, it returns
The
To get a reference to the
// Create a text layout using the text format.
if (SUCCEEDED(hr))
{
    RECT rect;
    GetClientRect(hwnd_, &rect);
    float width  = rect.right  / dpiScaleX_;
    float height = rect.bottom / dpiScaleY_;

    hr = pDWriteFactory_->CreateTextLayout(
        wszText_,      // The string to be laid out and formatted.
        cTextLength_,  // The length of the string.
        pTextFormat_,  // The text format to apply to the string (contains font information, etc).
        width,         // The width of the layout box.
        height,        // The height of the layout box.
        &pTextLayout_  // The interface reference.
        );
}
The following code calls the SetFontWeight method to set the font weight to bold for a range of text.
// Set the font weight to bold for the first 5 letters.
DWRITE_TEXT_RANGE textRange = {0, 4};
if (SUCCEEDED(hr))
{
    hr = pTextLayout_->SetFontWeight(DWRITE_FONT_WEIGHT_BOLD, textRange);
}
To draw the block of text represented by an IDWriteTextLayout object, Direct2D provides the ID2D1RenderTarget::DrawTextLayout method; alternatively, you can implement your own text renderer through the IDWriteTextRenderer interface.
Sets strikethrough for text within a specified text range.
-A Boolean flag that indicates whether strikethrough takes place in the range specified by textRange.
Text range to which this change applies.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Sets the application-defined drawing effect.
-Application-defined drawing effects that apply to the range. This data object will be passed back to the application's drawing callbacks for final rendering.
The text range to which this change applies.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
An ID2D1Brush, such as a color or gradient brush, can be set as the drawing effect if you use ID2D1RenderTarget::DrawTextLayout to draw the text; that brush is then used to draw the specified range of text.
This drawing effect is associated with the specified range and will be passed back to the application by way of the callback when the range is drawn at drawing time.
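As a concrete illustration, the following sketch (hypothetical variable names such as pRenderTarget and pTextLayout_, error handling trimmed) attaches a Direct2D solid-color brush as the drawing effect for a six-character range:

```cpp
// A brush is just an IUnknown to the layout, so any ID2D1Brush can be
// attached as the drawing effect for a range of text.
ID2D1SolidColorBrush* pRedBrush = nullptr;
HRESULT hr = pRenderTarget->CreateSolidColorBrush(
    D2D1::ColorF(D2D1::ColorF::Red), &pRedBrush);

if (SUCCEEDED(hr))
{
    DWRITE_TEXT_RANGE textRange = {20, 6};  // characters 20..25
    hr = pTextLayout_->SetDrawingEffect(pRedBrush, textRange);
}
// When DrawTextLayout renders this range, the default fill brush is
// replaced by pRedBrush.
```

When rendering with a custom IDWriteTextRenderer instead, the same pointer arrives in the drawing callbacks and can carry any application-defined state, not just a brush.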
-Sets the inline object.
-An application-defined inline object.
Text range to which this change applies.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The application may call this function to specify the set of properties describing an application-defined inline object for a specific range.
This inline object applies to the specified range and will be passed back to the application by way of the DrawInlineObject callback when the range is drawn. Any text in that range will be suppressed.
-Sets font typography features for text within a specified text range.
-Pointer to font typography settings.
Text range to which this change applies.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Sets the locale name for text within a specified text range.
-A null-terminated locale name string.
Text range to which this change applies.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the layout maximum width.
-Returns the layout maximum width.
Gets the layout maximum height.
-The layout maximum height.
Gets the font collection associated with the text at the specified position.
-The position of the text to inspect.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the underline.
Contains an address of a reference to the current font collection.
Get the length of the font family name at the current position.
-The current text position.
When this method returns, contains the size of the character array containing the font family name, in character count, not including the terminating null character.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the font family.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Copies the font family name of the text at the specified position.
-The position of the text to examine.
When this method returns, contains an array of characters that receives the current font family name. You must allocate storage for this parameter.
The size of the character array, in character count, including the terminating null character.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the font family name.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the font weight of the text at the specified position.
-The position of the text to inspect.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the font weight.
When this method returns, contains a value which indicates the type of font weight being applied at the specified position.
Gets the font style (also known as slope) of the text at the specified position.
-The position of the text to inspect.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the font style.
When this method returns, contains a value which indicates the type of font style (also known as slope or incline) being applied at the specified position.
Gets the font stretch of the text at the specified position.
-The position of the text to inspect.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the font stretch.
When this method returns, contains a value which indicates the type of font stretch (also known as width) being applied at the specified position.
Gets the font em height of the text at the specified position.
-The position of the text to inspect.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the font size.
When this method returns, contains the size of the font in ems of the text at the specified position.
Gets the underline presence of the text at the specified position.
-The current text position.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the underline.
A Boolean flag that indicates whether underline is present at the position indicated by currentPosition.
Get the strikethrough presence of the text at the specified position.
-The position of the text to inspect.
Contains the range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to strikethrough.
A Boolean flag that indicates whether strikethrough is present at the position indicated by currentPosition.
Gets the application-defined drawing effect at the specified text position.
-The position of the text whose drawing effect is to be retrieved.
Contains the range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the drawing effect.
When this method returns, contains an address of a reference to the current application-defined drawing effect. Usually this effect is a foreground brush that is used in glyph drawing.
Gets the inline object at the specified position.
-The specified text position.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the inline object.
Contains the application-defined inline object.
Gets the typography setting of the text at the specified position.
-The position of the text to inspect.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the typography.
When this method returns, contains an address of a reference to the current typography setting.
Gets the length of the locale name of the text at the specified position.
-The position of the text to inspect.
Size of the character array, in character count, not including the terminating null character.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the locale name.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the locale name of the text at the specified position.
-The position of the text to inspect.
When this method returns, contains the character array receiving the current locale name.
Size of the character array, in character count, including the terminating null character.
The range of text that has the same formatting as the text at the position specified by currentPosition. This means the run has the exact formatting as the position specified, including but not limited to the locale name.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Draws text using the specified client drawing context.
-An application-defined drawing context.
Pointer to the set of callback functions used to draw parts of a text string.
The x-coordinate of the layout's left side.
The y-coordinate of the layout's top side.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
To draw text with this method, a textLayout object needs to be created by the application using the IDWriteFactory::CreateTextLayout method.
After the textLayout object is obtained, the application calls the IDWriteTextLayout::Draw method to render it using the supplied set of callbacks.
If you set a vertical text reading direction on
Retrieves the information about each individual text line of the text string.
-When this method returns, contains a reference to an array of structures containing various calculated length values of individual text lines.
The maximum size of the lineMetrics array.
When this method returns, contains the actual size of the lineMetrics array that is needed.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If maxLineCount is not large enough, E_NOT_SUFFICIENT_BUFFER, which is equivalent to HRESULT_FROM_WIN32(ERROR_INSUFFICIENT_BUFFER), is returned and *actualLineCount is set to the number of lines needed.
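The buffer-sizing behavior suggests the usual two-call pattern; a minimal sketch, assuming pTextLayout_ is a valid IDWriteTextLayout pointer:

```cpp
// First call with no buffer to learn how many lines there are, then
// allocate and retrieve the metrics.
UINT32 lineCount = 0;
HRESULT hr = pTextLayout_->GetLineMetrics(nullptr, 0, &lineCount);
// hr is E_NOT_SUFFICIENT_BUFFER here; lineCount now holds the required size.

std::vector<DWRITE_LINE_METRICS> lineMetrics(lineCount);
hr = pTextLayout_->GetLineMetrics(lineMetrics.data(), lineCount, &lineCount);
```

GetClusterMetrics and HitTestTextRange follow the same sizing convention.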
Retrieves overall metrics for the formatted string.
-When this method returns, contains the measured distances of text and associated content after being formatted.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Returns the overhangs (in DIPs) of the layout and all objects contained in it, including text glyphs and inline objects.
-Overshoots of visible extents (in DIPs) outside the layout.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Underlines and strikethroughs do not contribute to the black box determination, since these are actually drawn by the renderer, which is allowed to draw them in any variety of styles.
-Retrieves logical properties and measurements of each glyph cluster.
-When this method returns, contains metrics, such as line-break or total advance width, for a glyph cluster.
The maximum size of the clusterMetrics array.
When this method returns, contains the actual size of the clusterMetrics array that is needed.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If maxClusterCount is not large enough, E_NOT_SUFFICIENT_BUFFER, which is equivalent to HRESULT_FROM_WIN32(ERROR_INSUFFICIENT_BUFFER), is returned and *actualClusterCount is set to the number of clusters needed.
Determines the minimum possible width the layout can be set to without emergency breaking between the characters of whole words occurring.
-Minimum width.
The application calls this function passing in a specific pixel location relative to the top-left location of the layout box and obtains the information about the correspondent hit-test metrics of the text string where the hit-test has occurred. When the specified pixel location is outside the text string, the function sets the output value *isInside to FALSE but prepares the output data of the text position nearest to the pixel location.
The pixel location X to hit-test, relative to the top-left location of the layout box.
The pixel location Y to hit-test, relative to the top-left location of the layout box.
An output flag that indicates whether the hit-test location is at the leading or the trailing side of the character. When the output *isInside value is set to FALSE, this value is set according to the output hitTestMetrics->textPosition value to represent the edge closest to the hit-test location.
An output flag that indicates whether the hit-test location is inside the text string. When FALSE, the position nearest the text's edge is returned.
The output geometry fully enclosing the hit-test location. When the output *isInside value is set to FALSE, this structure represents the geometry enclosing the edge nearest the hit-test location.
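Putting the parameters together, a minimal hit-testing sketch follows (mouseX and mouseY are assumed to be in DIPs, relative to the layout's top-left corner):

```cpp
// Translate a mouse position into a text position, e.g. on a click.
BOOL isTrailingHit = FALSE;
BOOL isInside = FALSE;
DWRITE_HIT_TEST_METRICS hitMetrics = {};

pTextLayout_->HitTestPoint(mouseX, mouseY,
                           &isTrailingHit, &isInside, &hitMetrics);

// Caret position: the hit character, advanced by one if the trailing
// side of the character was hit.
UINT32 caretPosition = hitMetrics.textPosition + (isTrailingHit ? 1 : 0);
```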
The application calls this function to get the pixel location relative to the top-left of the layout box given the text position and the logical side of the position. This function is normally used as part of caret positioning of text where the caret is drawn at the location corresponding to the current text editing position. It may also be used as a way to programmatically obtain the geometry of a particular text position in UI automation.
-The text position used to get the pixel location.
A Boolean flag that indicates whether the pixel location is of the leading or the trailing side of the specified text position.
When this method returns, contains the output pixel location X, relative to the top-left location of the layout box.
When this method returns, contains the output pixel location Y, relative to the top-left location of the layout box.
When this method returns, contains the output geometry fully enclosing the specified text position.
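For caret placement, the call can be used as in the following sketch (caretPosition is an assumed current edit position; error handling omitted):

```cpp
// Compute where to draw the caret for the current edit position.
FLOAT caretX = 0.0f, caretY = 0.0f;
DWRITE_HIT_TEST_METRICS caretMetrics = {};

pTextLayout_->HitTestTextPosition(
    caretPosition,     // text position to locate
    FALSE,             // leading edge of the character
    &caretX, &caretY, &caretMetrics);

// The caret can be drawn as a thin rectangle of the line's height.
D2D1_RECT_F caretRect = D2D1::RectF(
    caretX, caretY, caretX + 1.0f, caretY + caretMetrics.height);
```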
The application calls this function to get a set of hit-test metrics corresponding to a range of text positions. One of the main usages is to implement highlight selection of the text string. The function returns E_NOT_SUFFICIENT_BUFFER, which is equivalent to HRESULT_FROM_WIN32(ERROR_INSUFFICIENT_BUFFER), when the buffer size of hitTestMetrics is too small to hold all the regions calculated by the function. In this situation, the function sets the output value *actualHitTestMetricsCount to indicate the actual number of geometries calculated.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
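A selection-highlight sketch using the two-call sizing pattern (selStart, selLength, originX, originY, pRenderTarget and pSelectionBrush are assumed to exist):

```cpp
// Obtain the rectangles covering a selection and fill them before
// drawing the text over them.
UINT32 count = 0;
HRESULT hr = pTextLayout_->HitTestTextRange(
    selStart, selLength, originX, originY, nullptr, 0, &count);

std::vector<DWRITE_HIT_TEST_METRICS> regions(count);
hr = pTextLayout_->HitTestTextRange(
    selStart, selLength, originX, originY,
    regions.data(), count, &count);

for (const DWRITE_HIT_TEST_METRICS& m : regions)
{
    // A selection spanning several lines yields one region per line.
    D2D1_RECT_F r = D2D1::RectF(m.left, m.top,
                                m.left + m.width, m.top + m.height);
    pRenderTarget->FillRectangle(r, pSelectionBrush);
}
```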
Specifies a range of text positions where format is applied in the text represented by an IDWriteTextLayout object.
Represents a set of application-defined callbacks that perform rendering of text, inline objects, and decorations such as underlines.
-Represents a font typography setting.
-Gets the number of OpenType font features for the current font.
-A single run of text can be associated with more than one typographic feature. The IDWriteTypography object holds a list of these typographic features.
Adds an OpenType font feature.
-A structure that contains the OpenType name identifier and the execution parameter for the font feature being added.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the number of OpenType font features for the current font.
-The number of font features for the current text format.
A single run of text can be associated with more than one typographic feature. The IDWriteTypography object holds a list of these typographic features.
Gets the font feature at the specified index.
-The zero-based index of the font feature to retrieve.
When this method returns, contains the font feature which is at the specified index.
A single run of text can be associated with more than one typographic feature. The IDWriteTypography object holds a list of these typographic features.
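Taken together, a typical typography workflow might look like the following sketch (pDWriteFactory_ and pTextLayout_ are assumed valid; small capitals is chosen purely as an example feature):

```cpp
// Build a typography object, add one OpenType feature, and apply it
// to a range of the layout.
IDWriteTypography* pTypography = nullptr;
HRESULT hr = pDWriteFactory_->CreateTypography(&pTypography);

if (SUCCEEDED(hr))
{
    DWRITE_FONT_FEATURE feature = {DWRITE_FONT_FEATURE_TAG_SMALL_CAPITALS, 1};
    hr = pTypography->AddFontFeature(feature);
}
if (SUCCEEDED(hr))
{
    DWRITE_TEXT_RANGE textRange = {0, 10};
    hr = pTextLayout_->SetTypography(pTypography, textRange);
}
```

The parameter field of DWRITE_FONT_FEATURE is feature-specific; for on/off features such as 'smcp', 1 enables and 0 disables it.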
The DWRITE_BASELINE enumeration specifies the baseline for text alignment.
The Roman baseline for horizontal; the Central baseline for vertical.
The baseline that is used by alphabetic scripts such as Latin, Greek, and Cyrillic.
Central baseline, which is generally used for vertical text.
Mathematical baseline, which math characters are centered on.
Hanging baseline, which is used in scripts like Devanagari.
Ideographic bottom baseline for CJK, left in vertical.
Ideographic top baseline for CJK, right in vertical.
The bottom-most extent in horizontal, left-most in vertical.
The top-most extent in horizontal, right-most in vertical.
Indicates the condition at the edges of inline object or text used to determine line-breaking behavior.
-Indicates whether a break is allowed by determining the condition of the neighboring text span or inline object.
Indicates that a line break is allowed, unless overruled by the condition of the neighboring text span or inline object, either prohibited by a "may not break" condition or forced by a "must break" condition.
Indicates that there should be no line break, unless overruled by a "must break" condition from the neighboring text span or inline object.
Indicates that the line break must happen, regardless of the condition of the adjacent text span or inline object.
Represents the degree to which a font has been stretched compared to a font's normal aspect ratio. The enumerated values correspond to the usWidthClass definition in the OpenType specification. The usWidthClass represents an integer value between 1 and 9; lower values indicate narrower widths, higher values indicate wider widths.
-A font stretch describes the degree to which a font form is stretched from its normal aspect ratio, which is the original width to height ratio specified for the glyphs in the font.
The following illustration shows an example of Normal and Condensed stretches for the Rockwell Bold typeface.
Note: Values other than the ones defined in the enumeration are considered to be invalid, and are rejected by font API functions.
-Predefined font stretch: Not known (0).
Predefined font stretch: Ultra-condensed (1).
Predefined font stretch: Extra-condensed (2).
Specifies the type of DirectWrite factory object.
-A DirectWrite factory object contains information about its internal state, such as font loader registration and cached font data. In most cases you should use the shared factory object, because it allows multiple components that use DirectWrite to share internal DirectWrite state information, thereby reducing memory usage. However, there are cases when it is desirable to reduce the impact of a component on the rest of the process, such as a plug-in from an untrusted source, by sandboxing and isolating it from the rest of the process components. In such cases, you should use an isolated factory for the sandboxed component.
-Indicates that the DirectWrite factory is a shared factory and that it allows for the reuse of cached font data across multiple in-process components. Such factories also take advantage of cross process font caching components for better performance.
Indicates that the DirectWrite factory object is isolated. Objects created from the isolated factory do not interact with internal DirectWrite state from other components.
Indicates the direction of how lines of text are placed relative to one another.
-Specifies that text lines are placed from top to bottom.
Specifies that text lines are placed from bottom to top.
Specifies that text lines are placed from left to right.
Specifies that text lines are placed from right to left.
Indicates the file format of a complete font face.
-Font formats that consist of multiple files, such as Type 1 .PFM and .PFB, have a single enum entry.
-OpenType font face with CFF outlines.
OpenType font face with TrueType outlines.
OpenType font face with TrueType outlines.
A Type 1 font face.
A vector .FON format font face.
A bitmap .FON format font face.
Font face type is not recognized by the DirectWrite font system.
The font data includes only the CFF table from an OpenType CFF font. This font face type can be used only for embedded fonts (i.e., custom font file loaders) and the resulting font face object supports only the minimum functionality necessary to render glyphs.
OpenType font face that is a part of a TrueType collection.
A value that indicates the typographic feature of text supplied by the font.
-Replaces figures separated by a slash with an alternative form.
Equivalent OpenType tag: 'afrc'
Turns capital characters into petite capitals. It is generally used for words which would otherwise be set in all caps, such as acronyms, but which are desired in petite-cap form to avoid disrupting the flow of text. See the pcap feature description for notes on the relationship of caps, smallcaps and petite caps.
Equivalent OpenType tag: 'c2pc'
Turns capital characters into small capitals. It is generally used for words which would otherwise be set in all caps, such as acronyms, but which are desired in small-cap form to avoid disrupting the flow of text.
Equivalent OpenType tag: 'c2sc'
In specified situations, replaces default glyphs with alternate forms which provide better joining behavior. Used in script typefaces which are designed to have some or all of their glyphs join.
Equivalent OpenType tag: 'calt'
Shifts various punctuation marks up to a position that works better with all-capital sequences or sets of lining figures; also changes oldstyle figures to lining figures. By default, glyphs in a text face are designed to work with lowercase characters. Some characters should be shifted vertically to fit the higher visual center of all-capital or lining text. Also, lining figures are the same height (or close to it) as capitals, and fit much better with all-capital text.
Equivalent OpenType tag: 'case'
To minimize the number of glyph alternates, it is sometimes desired to decompose a character into two glyphs. Additionally, it may be preferable to compose two characters into a single glyph for better glyph processing. This feature permits such composition/decomposition. The feature should be processed as the first feature processed, and should be processed only when it is called.
Equivalent OpenType tag: 'ccmp'
Replaces a sequence of glyphs with a single glyph which is preferred for typographic purposes. Unlike other ligature features, clig specifies the context in which the ligature is recommended. This capability is important in some script designs and for swash ligatures.
Equivalent OpenType tag: 'clig'
Globally adjusts inter-glyph spacing for all-capital text. Most typefaces contain capitals and lowercase characters, and the capitals are positioned to work with the lowercase. When capitals are used for words, they need more space between them for legibility and esthetics. This feature would not apply to monospaced designs. Of course the user may want to override this behavior in order to do more pronounced letterspacing for esthetic reasons.
Equivalent OpenType tag: 'cpsp'
Replaces default character glyphs with corresponding swash glyphs in a specified context. Note that there may be more than one swash alternate for a given character.
Equivalent OpenType tag: 'cswh'
In cursive scripts like Arabic, this feature cursively positions adjacent glyphs.
Equivalent OpenType tag: 'curs'
Globally adjusts inter-glyph spacing for all-capital text. Most typefaces contain capitals and lowercase characters, and the capitals are positioned to work with the lowercase. When capitals are used for words, they need more space between them for legibility and esthetics. This feature would not apply to monospaced designs. Of course the user may want to override this behavior in order to do more pronounced letterspacing for esthetic reasons.
Equivalent OpenType tag: 'cpsp'
Replaces a sequence of glyphs with a single glyph which is preferred for typographic purposes. This feature covers those ligatures which may be used for special effect, at the user's preference.
Equivalent OpenType tag: 'dlig'
Replaces standard forms in Japanese fonts with corresponding forms preferred by typographers. For example, a user would invoke this feature to replace kanji character U+5516 with U+555E.
Equivalent OpenType tag: 'expt'
Replaces figures separated by a slash with 'common' (diagonal) fractions.
Equivalent OpenType tag: 'frac'
Replaces glyphs set on other widths with glyphs set on full (usually em) widths. In a CJKV font, this may include "lower ASCII" Latin characters and various symbols. In a European font, this feature replaces proportionally-spaced glyphs with monospaced glyphs, which are generally set on widths of 0.6 em. For example, a user may invoke this feature in a Japanese font to get full monospaced Latin glyphs instead of the corresponding proportionally-spaced versions.
Equivalent OpenType tag: 'fwid'
Produces the half forms of consonants in Indic scripts. For example, in Hindi (Devanagari script), the conjunct KKa, obtained by doubling the Ka, is denoted with a half form of Ka followed by the full form.
Equivalent OpenType tag: 'half'
Produces the halant forms of consonants in Indic scripts. For example, in Sanskrit (Devanagari script), syllable final consonants are frequently required in their halant form.
Equivalent OpenType tag: 'haln'
Respaces glyphs designed to be set on full-em widths, fitting them onto half-em widths. This differs from hwid in that it does not substitute new glyphs.
Equivalent OpenType tag: 'halt'
Replaces the default (current) forms with the historical alternates. While some ligatures are also used for historical effect, this feature deals only with single characters. Some fonts include the historical forms as alternates, so they can be used for a 'period' effect.
Equivalent OpenType tag: 'hist'
Replaces standard kana with forms that have been specially designed for only horizontal writing. This is a typographic optimization for improved fit and more even color.
Equivalent OpenType tag: 'hkna'
Replaces the default (current) forms with the historical alternates. Some ligatures were in common use in the past, but appear anachronistic today. Some fonts include the historical forms as alternates, so they can be used for a 'period' effect.
Equivalent OpenType tag: 'hlig'
Replaces glyphs on proportional widths, or fixed widths other than half an em, with glyphs on half-em (en) widths. Many CJKV fonts have glyphs which are set on multiple widths; this feature selects the half-em version. There are various contexts in which this is the preferred behavior, including compatibility with older desktop documents.
Equivalent OpenType tag: 'hwid'
Used to access the JIS X 0212-1990 glyphs for the cases when the JIS X 0213:2004 form is encoded. The JIS X 0212-1990 (aka, "Hojo Kanji") and JIS X 0213:2004 character sets overlap significantly. In some cases their prototypical glyphs differ. When building fonts that support both JIS X 0212-1990 and JIS X 0213:2004 (such as those supporting the Adobe-Japan 1-6 character collection), it is recommended that JIS X 0213:2004 forms be the preferred encoded form.
Equivalent OpenType tag: 'hojo'
The National Language Council (NLC) of Japan has defined new glyph shapes for a number of JIS characters, which were incorporated into JIS X 0213:2004 as new prototypical forms. The 'jp04' feature is a subset of the 'nlck' feature, and is used to access these prototypical glyphs in a manner that maintains the integrity of JIS X 0213:2004.
Equivalent OpenType tag: 'jp04'
Replaces default (JIS90) Japanese glyphs with the corresponding forms from the JIS C 6226-1978 (JIS78) specification.
Equivalent OpenType tag: 'jp78'
Replaces default (JIS90) Japanese glyphs with the corresponding forms from the JIS X 0208-1983 (JIS83) specification.
Equivalent OpenType tag: 'jp83'
Replaces Japanese glyphs from the JIS78 or JIS83 specifications with the corresponding forms from the JIS X 0208-1990 (JIS90) specification.
Equivalent OpenType tag: 'jp90'
Adjusts amount of space between glyphs, generally to provide optically consistent spacing between glyphs. Although a well-designed typeface has consistent inter-glyph spacing overall, some glyph combinations require adjustment for improved legibility. Besides standard adjustment in the horizontal direction, this feature can supply size-dependent kerning data via device tables, "cross-stream" kerning in the Y text direction, and adjustment of glyph placement independent of the advance adjustment. Note that this feature may apply to runs of more than two glyphs, and would not be used in monospaced fonts. Also note that this feature does not apply to text set vertically.
Equivalent OpenType tag: 'kern'
Replaces a sequence of glyphs with a single glyph which is preferred for typographic purposes. This feature covers the ligatures which the designer/manufacturer judges should be used in normal conditions.
Equivalent OpenType tag: 'liga'
Changes selected figures from oldstyle to the default lining form. For example, a user may invoke this feature in order to get lining figures, which fit better with all-capital text. This feature overrides results of the Oldstyle Figures feature (onum).
Equivalent OpenType tag: 'lnum'
Enables localized forms of glyphs to be substituted for default forms. Many scripts used to write multiple languages over wide geographical areas have developed localized variant forms of specific letters, which are used by individual literary communities. For example, a number of letters in the Bulgarian and Serbian alphabets have forms distinct from their Russian counterparts and from each other. In some cases the localized form differs only subtly from the script 'norm', in others the forms are radically distinct.
Equivalent OpenType tag: 'locl'
Positions mark glyphs with respect to base glyphs. For example, in Arabic script positioning the Hamza above the Yeh.
Equivalent OpenType tag: 'mark'
Replaces standard typographic forms of Greek glyphs with corresponding forms commonly used in mathematical notation (which are a subset of the Greek alphabet).
Equivalent OpenType tag: 'mgrk'
Positions marks with respect to other marks. Required in various non-Latin scripts like Arabic. For example, in Arabic, the ligaturised mark Ha with Hamza above it can also be obtained by positioning these marks relative to one another.
Equivalent OpenType tag: 'mkmk'
Replaces default glyphs with various notational forms (such as glyphs placed in open or solid circles, squares, parentheses, diamonds or rounded boxes). In some cases an annotation form may already be present, but the user may want a different one.
Equivalent OpenType tag: 'nalt'
Used to access glyphs made from glyph shapes defined by the National Language Council (NLC) of Japan for a number of JIS characters in 2000.
Equivalent OpenType tag: 'nlck'
Changes selected figures from the default lining style to oldstyle form. For example, a user may invoke this feature to get oldstyle figures, which fit better into the flow of normal upper- and lowercase text. This feature overrides results of the Lining Figures feature (lnum).
Equivalent OpenType tag: 'onum'
Replaces default alphabetic glyphs with the corresponding ordinal forms for use after figures. One exception to the follows-a-figure rule is the numero character (U+2116), which is actually a ligature substitution, but is best accessed through this feature.
Equivalent OpenType tag: 'ordn'
Respaces glyphs designed to be set on full-em widths, fitting them onto individual (more or less proportional) horizontal widths. This differs from pwid in that it does not substitute new glyphs (GPOS, not GSUB feature). The user may prefer the monospaced form, or may simply want to ensure that the glyph is well-fit and not rotated in vertical setting (Latin forms designed for proportional spacing would be rotated).
Equivalent OpenType tag: 'palt'
Turns lowercase characters into petite capitals. Forms related to petite capitals, such as specially designed figures, may be included. Some fonts contain an additional size of capital letters, shorter than the regular smallcaps and it is referred to as petite caps. Such forms are most likely to be found in designs with a small lowercase x-height, where they better harmonise with lowercase text than the taller smallcaps (for examples of petite caps, see the Emigre type families Mrs Eaves and Filosofia).
Equivalent OpenType tag: 'pcap'
Replaces figure glyphs set on uniform (tabular) widths with corresponding glyphs set on glyph-specific (proportional) widths. Tabular widths will generally be the default, but this cannot be safely assumed. Of course this feature would not be present in monospaced designs.
Equivalent OpenType tag: 'pnum'
Replaces glyphs set on uniform widths (typically full or half-em) with proportionally spaced glyphs. The proportional variants are often used for the Latin characters in CJKV fonts, but may also be used for Kana in Japanese fonts.
Equivalent OpenType tag: 'pwid'
Replaces glyphs on other widths with glyphs set on widths of one quarter of an em (half an en). The characters involved are normally figures and some forms of punctuation.
Equivalent OpenType tag: 'qwid'
Replaces a sequence of glyphs with a single glyph which is preferred for typographic purposes. This feature covers those ligatures, which the script determines as required to be used in normal conditions. This feature is important for some scripts to ensure correct glyph formation.
Equivalent OpenType tag: 'rlig'
Identifies glyphs in the font which have been designed for "ruby", from the old typesetting term for four-point-sized type. Japanese typesetting often uses smaller kana glyphs, generally in superscripted form, to clarify the meaning of kanji which may be unfamiliar to the reader.
Equivalent OpenType tag: 'ruby'
Replaces the default forms with the stylistic alternates. Many fonts contain alternate glyph designs for a purely esthetic effect; these don't always fit into a clear category like swash or historical. As in the case of swash glyphs, there may be more than one alternate form.
Equivalent OpenType tag: 'salt'
Replaces lining or oldstyle figures with inferior figures (smaller glyphs which sit lower than the standard baseline, primarily for chemical or mathematical notation). May also replace lowercase characters with alphabetic inferiors.
Equivalent OpenType tag: 'sinf'
Turns lowercase characters into small capitals. This corresponds to the common SC font layout. It is generally used for display lines set in Large & small caps, such as titles. Forms related to small capitals, such as oldstyle figures, may be included.
Equivalent OpenType tag: 'smcp'
Replaces 'traditional' Chinese or Japanese forms with the corresponding 'simplified' forms.
Equivalent OpenType tag: 'smpl'
In addition to, or instead of, stylistic alternatives of individual glyphs (see 'salt' feature), some fonts may contain sets of stylistic variant glyphs corresponding to portions of the character set, such as multiple variants for lowercase letters in a Latin font. Glyphs in stylistic sets may be designed to harmonise visually, interact in particular ways, or otherwise work together. Examples of fonts including stylistic sets are Zapfino Linotype and Adobe's Poetica. Individual features numbered sequentially with the tag name convention 'ss01', 'ss02', 'ss03' ... 'ss20' provide a mechanism for glyphs in these sets to be associated via GSUB lookup indexes to default forms and to each other, and for users to select from available stylistic sets.
Equivalent OpenType tag: 'ss01'
See the description for
Equivalent OpenType tag: 'ss02'
See the description for
Equivalent OpenType tag: 'ss03'
See the description for
Equivalent OpenType tag: 'ss04'
See the description for
Equivalent OpenType tag: 'ss05'
See the description for
Equivalent OpenType tag: 'ss06'
See the description for
Equivalent OpenType tag: 'ss07'
See the description for
Equivalent OpenType tag: 'ss08'
See the description for
Equivalent OpenType tag: 'ss09'
See the description for
Equivalent OpenType tag: 'ss10'
See the description for
Equivalent OpenType tag: 'ss11'
See the description for
Equivalent OpenType tag: 'ss12'
See the description for
Equivalent OpenType tag: 'ss13'
See the description for
Equivalent OpenType tag: 'ss14'
See the description for
Equivalent OpenType tag: 'ss15'
See the description for
Equivalent OpenType tag: 'ss16'
See the description for
Equivalent OpenType tag: 'ss17'
See the description for
Equivalent OpenType tag: 'ss18'
See the description for
Equivalent OpenType tag: 'ss19'
See the description for
Equivalent OpenType tag: 'ss20'
May replace a default glyph with a subscript glyph, or it may combine a glyph substitution with positioning adjustments for proper placement.
Equivalent OpenType tag: 'subs'
Replaces lining or oldstyle figures with superior figures (primarily for footnote indication), and replaces lowercase letters with superior letters (primarily for abbreviated French titles).
Equivalent OpenType tag: 'sups'
Replaces default character glyphs with corresponding swash glyphs. Note that there may be more than one swash alternate for a given character.
Equivalent OpenType tag: 'swsh'
Replaces the default glyphs with corresponding forms designed specifically for titling. These may be all-capital and/or larger on the body, and adjusted for viewing at larger sizes.
Equivalent OpenType tag: 'titl'
Replaces 'simplified' Japanese kanji forms with the corresponding 'traditional' forms. This is equivalent to the Traditional Forms feature, but explicitly limited to the traditional forms considered proper for use in personal names (as many as 205 glyphs in some fonts).
Equivalent OpenType tag: 'tnam'
Replaces figure glyphs set on proportional widths with corresponding glyphs set on uniform (tabular) widths. Tabular widths will generally be the default, but this cannot be safely assumed. Of course this feature would not be present in monospaced designs.
Equivalent OpenType tag: 'tnum'
Replaces 'simplified' Chinese hanzi or Japanese kanji forms with the corresponding 'traditional' forms.
Equivalent OpenType tag: 'trad'
Replaces glyphs on other widths with glyphs set on widths of one third of an em. The characters involved are normally figures and some forms of punctuation.
Equivalent OpenType tag: 'twid'
Maps upper- and lowercase letters to a mixed set of lowercase and small capital forms, resulting in a single case alphabet (for an example of unicase, see the Emigre type family Filosofia). The letters substituted may vary from font to font, as appropriate to the design. If aligning to the x-height, smallcap glyphs may be substituted, or specially designed unicase forms might be used. Substitutions might also include specially designed figures.
Equivalent OpenType tag: 'unic'
Indicates that the font is displayed vertically.
Replaces normal figures with figures adjusted for vertical display.
Allows the user to change from the default 0 to a slashed form. Some fonts contain both a default form of zero, and an alternative form which uses a diagonal slash through the counter. Especially in condensed designs, it can be difficult to distinguish between 0 and O (zero and capital O) in any situation where capitals and lining figures may be arbitrarily mixed.
Equivalent OpenType tag: 'zero'
The type of a font represented by a single font file. Font formats that consist of multiple files, for example Type 1 .PFM and .PFB, have separate enum values for each of the file types.
Font type is not recognized by the DirectWrite font system.
OpenType font with CFF outlines.
OpenType font with TrueType outlines.
OpenType font that contains a TrueType collection.
Type 1 PFM font.
Type 1 PFB font.
Vector .FON font.
Bitmap .FON font.
OpenType font that contains a TrueType collection.
Specify whether
Identifies a string in a font.
Unspecified font property identifier.
Family name for the weight-width-slope model.
Family name preferred by the designer. This enables font designers to group more than four fonts in a single family without losing compatibility with GDI. This name is typically only present if it differs from the GDI-compatible family name.
Face name of the font, for example Regular or Bold.
The full name of the font, for example "Arial Bold", from name id 4 in the name table.
GDI-compatible family name. Because GDI allows a maximum of four fonts per family, fonts in the same family may have different GDI-compatible family names, for example "Arial", "Arial Narrow", "Arial Black".
The postscript name of the font, for example "GillSans-Bold", from name id 6 in the name table.
Script/language tag to identify the scripts or languages that the font was primarily designed to support.
Script/language tag to identify the scripts or languages that the font declares it is able to support.
Semantic tag to describe the font, for example Fancy, Decorative, Handmade, Sans-serif, Swiss, Pixel, Futuristic.
Weight of the font represented as a decimal string in the range 1-999.
Stretch of the font represented as a decimal string in the range 1-9.
Style of the font represented as a decimal string in the range 0-2.
Total number of properties.
Specifies algorithmic style simulations to be applied to the font face. Bold and oblique simulations can be combined via bitwise OR operation.
Style simulations are not recommended for good typographic quality.
Indicates that no simulations are applied to the font face.
Indicates that algorithmic emboldening is applied to the font face.
Indicates that algorithmic italicization is applied to the font face.
Represents the degree to which a font has been stretched compared to a font's normal aspect ratio. The enumerated values correspond to the usWidthClass definition in the OpenType specification. The usWidthClass represents an integer value between 1 and 9; lower values indicate narrower widths; higher values indicate wider widths.
A font stretch describes the degree to which a font form is stretched from its normal aspect ratio, which is the original width to height ratio specified for the glyphs in the font. The following illustration shows an example of Normal and Condensed stretches for the Rockwell Bold typeface.
Note: Values other than the ones defined in the enumeration are considered to be invalid, and are rejected by font API functions.
Predefined font stretch : Not known (0).
Predefined font stretch : Ultra-condensed (1).
Predefined font stretch : Extra-condensed (2).
Predefined font stretch : Condensed (3).
Predefined font stretch : Semi-condensed (4).
Predefined font stretch : Normal (5).
Predefined font stretch : Medium (5).
Predefined font stretch : Semi-expanded (6).
Predefined font stretch : Expanded (7).
Predefined font stretch : Extra-expanded (8).
Predefined font stretch : Ultra-expanded (9).
Represents the style of a font face as normal, italic, or oblique.
Three terms categorize the slant of a font: normal, italic, and oblique.
Font style | Description
---|---
Normal | The characters in a normal, or roman, font are upright.
Italic | The characters in an italic font are truly slanted and appear as they were designed.
Oblique | The characters in an oblique font are artificially slanted.
For Oblique, the slant is achieved by performing a shear transformation on the characters from a normal font. When a true italic font is not available on a computer or printer, an oblique style can be generated from the normal font and used to simulate an italic font. The following illustration shows the normal, italic, and oblique font styles for the Palatino Linotype font. Notice how the italic font style has a more flowing and visually appealing appearance than the oblique font style, which is simply created by skewing the normal font style version of the text.
Note: Values other than the ones defined in the enumeration are considered to be invalid, and they are rejected by font API functions.
Font style : Normal.
Font style : Oblique.
Font style : Italic.
Represents the density of a typeface, in terms of the lightness or heaviness of the strokes. The enumerated values correspond to the usWeightClass definition in the OpenType specification. The usWeightClass represents an integer value between 1 and 999. Lower values indicate lighter weights; higher values indicate heavier weights.
Weight differences are generally differentiated by an increased stroke or thickness that is associated with a given character in a typeface, as compared to a "normal" character from that same typeface. The following illustration shows an example of Normal and UltraBold weights for the Palatino Linotype typeface.
Note: Not all weights are available for all typefaces. When a weight is not available for a typeface, the closest matching weight is returned. Font weight values less than 1 or greater than 999 are considered invalid, and they are rejected by font API functions.
Predefined font weight : Thin (100).
Predefined font weight : Extra-light (200).
Predefined font weight : Ultra-light (200).
Predefined font weight : Light (300).
Predefined font weight : Semi-Light (350).
Predefined font weight : Normal (400).
Predefined font weight : Regular (400).
Predefined font weight : Medium (500).
Predefined font weight : Demi-bold (600).
Predefined font weight : Semi-bold (600).
Predefined font weight : Bold (700).
Predefined font weight : Extra-bold (800).
Predefined font weight : Ultra-bold (800).
Predefined font weight : Black (900).
Predefined font weight : Heavy (900).
Predefined font weight : Extra-black (950).
Predefined font weight : Ultra-black (950).
The
The text analyzer outputs
Glyph orientation is upright.
Glyph orientation is rotated 90 degrees clockwise.
Glyph orientation is upside-down.
Glyph orientation is rotated 270 degrees clockwise.
Specifies whether to enable grid-fitting of glyph outlines (also known as hinting).
Choose grid fitting based on the font's table information.
Always disable grid fitting, using the ideal glyph outlines.
Enable grid fitting, adjusting glyph outlines for device pixel display.
The informational string enumeration which identifies a string embedded in a font file.
Indicates the string containing the unspecified name ID.
Indicates the string containing the copyright notice provided by the font.
Indicates the string containing a version number.
Indicates the string containing the trademark information provided by the font.
Indicates the string containing the name of the font manufacturer.
Indicates the string containing the name of the font designer.
Indicates the string containing the URL of the font designer (with protocol, e.g., http://, ftp://).
Indicates the string containing the description of the font. This may also contain revision information, usage recommendations, history, features, and so on.
Indicates the string containing the URL of the font vendor (with protocol, e.g., http://, ftp://). If a unique serial number is embedded in the URL, it can be used to register the font.
Indicates the string containing the description of how the font may be legally used, or different example scenarios for licensed use.
Indicates the string containing the URL where additional licensing information can be found.
Indicates the string containing the GDI-compatible family name. Since GDI allows a maximum of four fonts per family, fonts in the same family may have different GDI-compatible family names (e.g., "Arial", "Arial Narrow", "Arial Black").
Indicates the string containing a GDI-compatible subfamily name.
Indicates the string containing the family name preferred by the designer. This enables font designers to group more than four fonts in a single family without losing compatibility with GDI. This name is typically only present if it differs from the GDI-compatible family name.
Indicates the string containing the subfamily name preferred by the designer. This name is typically only present if it differs from the GDI-compatible subfamily name.
Contains sample text for display in font lists. This can be the font name or any other text that the designer thinks is the best example to display the font in.
The full name of the font, like Arial Bold, from name id 4 in the name table.
The postscript name of the font, like GillSans-Bold, from name id 6 in the name table.
The postscript CID findfont name, from name id 20 in the name table.
The method used for line spacing in a text layout.
The line spacing method is set by using the SetLineSpacing method of the
Line spacing depends solely on the content, adjusting to accommodate the size of fonts and inline objects.
Lines are explicitly set to uniform spacing, regardless of the size of fonts and inline objects. This can be useful to avoid the uneven appearance that can occur from font fallback.
Line spacing and baseline distances are proportional to the computed values based on the content, the size of the fonts and inline objects.
Note: This value is only available on Windows 10 or later, and it can be used with
Specifies the location of a resource.
The resource is remote, and information about it is unknown, including the file size and date. If you attempt to create a font or file stream, the creation will fail until locality becomes at least partial.
The resource is partially local, which means you can query the size and date of the file stream. With this type, you also might be able to create a font face and retrieve the particular glyphs for metrics and drawing, but not all the glyphs will be present.
The resource is completely local, and all font functions can be called without concern of missing data or errors related to network connectivity.
Specifies how to apply number substitution on digits and related punctuation.
Specifies that the substitution method should be determined based on the LOCALE_IDIGITSUBSTITUTION value of the specified text culture.
If the culture is Arabic or Persian, specifies that the number shapes depend on the context. Either traditional or nominal number shapes are used, depending on the nearest preceding strong character or (if there is none) the reading direction of the paragraph.
Specifies that code points 0x30-0x39 are always rendered as nominal numeral shapes (ones of the European number), that is, no substitution is performed.
Specifies that numbers are rendered using the national number shapes as specified by the LOCALE_SNATIVEDIGITS value of the specified text culture.
Specifies that numbers are rendered using the traditional shapes for the specified culture. For most cultures, this is the same as NativeNational. However, NativeNational results in Latin numbers for some Arabic cultures, whereas DWRITE_NUMBER_SUBSTITUTION_METHOD_TRADITIONAL results in Arabic numbers for all Arabic cultures.
The optical margin alignment mode.
By default, glyphs are aligned to the margin by the default origin and side-bearings of the glyph. If you specify
Align to the default origin and side-bearings of the glyph.
Align to the ink of the glyphs, such that the black box abuts the margins.
The
Glyphs are rendered in outline mode by default at large sizes for performance reasons, but how large (that is, the outline threshold) depends on the quality of outline rendering. If the graphics system renders anti-aliased outlines, a relatively low threshold is used. But if the graphics system renders aliased outlines, a much higher threshold is used.
The
Any arm style.
No fit arm style.
The arm style is straight horizontal.
The arm style is straight wedge.
The arm style is straight vertical.
The arm style is straight single serif.
The arm style is straight double serif.
The arm style is non-straight horizontal.
The arm style is non-straight wedge.
The arm style is non-straight vertical.
The arm style is non-straight single serif.
The arm style is non-straight double serif.
The arm style is straight horizontal.
The arm style is straight vertical.
The arm style is non-straight horizontal.
The arm style is non-straight wedge.
The arm style is non-straight vertical.
The arm style is non-straight single serif.
The arm style is non-straight double serif.
The
Any aspect.
No fit for aspect.
Super condensed aspect.
Very condensed aspect.
Condensed aspect.
Normal aspect.
Extended aspect.
Very extended aspect.
Super extended aspect.
Monospace aspect.
The
Any aspect ratio.
No fit for aspect ratio.
Very condensed aspect ratio.
Condensed aspect ratio.
Normal aspect ratio.
Expanded aspect ratio.
Very expanded aspect ratio.
The
Any range.
No fit for range.
The range includes extended collection.
The range includes literals.
The range doesn't include lower case.
The range includes small capitals.
The
Any contrast.
No fit contrast.
No contrast.
Very low contrast.
Low contrast.
Medium low contrast.
Medium contrast.
Medium high contrast.
High contrast.
Very high contrast.
Horizontal low contrast.
Horizontal medium contrast.
Horizontal high contrast.
Broken contrast.
The
Any class of decorative typeface.
No fit for decorative typeface.
Derivative decorative typeface.
Nonstandard topology decorative typeface.
Nonstandard elements decorative typeface.
Nonstandard aspect decorative typeface.
Initials decorative typeface.
Cartoon decorative typeface.
Picture stems decorative typeface.
Ornamented decorative typeface.
Text and background decorative typeface.
Collage decorative typeface.
Montage decorative typeface.
The
Any decorative topology.
No fit for decorative topology.
Standard decorative topology.
Square decorative topology.
Multiple segment decorative topology.
Art deco decorative topology.
Uneven weighting decorative topology.
Diverse arms decorative topology.
Diverse forms decorative topology.
Lombardic forms decorative topology.
Upper case in lower case decorative topology.
The decorative topology is implied.
Horseshoe E and A decorative topology.
Cursive decorative topology.
Blackletter decorative topology.
Swash variance decorative topology.
The
Any typeface classification.
No fit typeface classification.
Text display typeface classification.
Script (or hand written) typeface classification.
Decorative typeface classification.
Symbol typeface classification.
Pictorial (or symbol) typeface classification.
The
Any fill.
No fit for fill.
The fill is the standard solid fill.
No fill.
The fill is patterned fill.
The fill is complex fill.
The fill is shaped fill.
The fill is drawn distressed.
The
Any finials.
No fit for finials.
No loops.
No closed loops.
No open loops.
Sharp with no loops.
Sharp with closed loops.
Sharp with open loops.
Tapered with no loops.
Tapered with closed loops.
Tapered with open loops.
Round with no loops.
Round with closed loops.
Round with open loops.
The
Any letterform.
No fit letterform.
Normal contact letterform.
Normal weighted letterform.
Normal boxed letterform.
Normal flattened letterform.
Normal rounded letterform.
Normal off-center letterform.
Normal square letterform.
Oblique contact letterform.
Oblique weighted letterform.
Oblique boxed letterform.
Oblique flattened letterform.
Oblique rounded letterform.
Oblique off-center letterform.
Oblique square letterform.
The
Any lining.
No fit for lining.
No lining.
The lining is inline.
The lining is outline.
The lining is engraved.
The lining is shadowed.
The lining is relief.
The lining is backdrop.
The
Any midline.
No fit midline.
Standard trimmed midline.
Standard pointed midline.
Standard serifed midline.
High trimmed midline.
High pointed midline.
High serifed midline.
Constant trimmed midline.
Constant pointed midline.
Constant serifed midline.
Low trimmed midline.
Low pointed midline.
Low serifed midline.
The
Any proportion for the text.
No fit proportion for the text.
Old style proportion for the text.
Modern proportion for the text.
Extra width proportion for the text.
Expanded proportion for the text.
Condensed proportion for the text.
Very expanded proportion for the text.
Very condensed proportion for the text.
Monospaced proportion for the text.
The
Any script form.
No fit for script form.
Script form is upright with no wrapping.
Script form is upright with some wrapping.
Script form is upright with more wrapping.
Script form is upright with extreme wrapping.
Script form is oblique with no wrapping.
Script form is oblique with some wrapping.
Script form is oblique with more wrapping.
Script form is oblique with extreme wrapping.
Script form is exaggerated with no wrapping.
Script form is exaggerated with some wrapping.
Script form is exaggerated with more wrapping.
Script form is exaggerated with extreme wrapping.
The
Any script topology.
No fit for script topology.
Script topology is roman disconnected.
Script topology is roman trailing.
Script topology is roman connected.
Script topology is cursive disconnected.
Script topology is cursive trailing.
Script topology is cursive connected.
Script topology is black-letter disconnected.
Script topology is black-letter trailing.
Script topology is black-letter connected.
The
Any appearance of the serif text.
No fit appearance of the serif text.
Cove appearance of the serif text.
Obtuse cove appearance of the serif text.
Square cove appearance of the serif text.
Obtuse square cove appearance of the serif text.
Square appearance of the serif text.
Thin appearance of the serif text.
Oval appearance of the serif text.
Exaggerated appearance of the serif text.
Triangle appearance of the serif text.
Normal sans appearance of the serif text.
Obtuse sans appearance of the serif text.
Perpendicular sans appearance of the serif text.
Flared appearance of the serif text.
Rounded appearance of the serif text.
Script appearance of the serif text.
Perpendicular sans appearance of the serif text.
Oval appearance of the serif text.
The
Any spacing.
No fit for spacing.
Spacing is proportional.
Spacing is monospace.
The
Any stroke variation for text characters.
No fit stroke variation for text characters.
No stroke variation for text characters.
The stroke variation for text characters is gradual diagonal.
The stroke variation for text characters is gradual transitional.
The stroke variation for text characters is gradual vertical.
The stroke variation for text characters is gradual horizontal.
The stroke variation for text characters is rapid vertical.
The stroke variation for text characters is rapid horizontal.
The stroke variation for text characters is instant vertical.
The stroke variation for text characters is instant horizontal.
The
Any aspect ratio of symbolic characters.
No fit for aspect ratio of symbolic characters.
No width aspect ratio of symbolic characters.
Exceptionally wide symbolic characters.
Super wide symbolic characters.
Very wide symbolic characters.
Wide symbolic characters.
Normal aspect ratio of symbolic characters.
Narrow symbolic characters.
Very narrow symbolic characters.
The
Any kind of symbol set.
No fit for the kind of symbol set.
The kind of symbol set is montages.
The kind of symbol set is pictures.
The kind of symbol set is shapes.
The kind of symbol set is scientific symbols.
The kind of symbol set is music symbols.
The kind of symbol set is expert symbols.
The kind of symbol set is patterns.
The kind of symbol set is borders.
The kind of symbol set is icons.
The kind of symbol set is logos.
The kind of symbol set is industry specific.
The
Any kind of tool.
No fit for the kind of tool.
Flat NIB tool.
Pressure point tool.
Engraved tool.
Ball tool.
Brush tool.
Rough tool.
Felt-pen-brush-tip tool.
Wild-brush tool.
The
The
Any weight.
No fit weight.
Very light weight.
Light weight.
Thin weight.
Book weight.
Medium weight.
Demi weight.
Bold weight.
Heavy weight.
Black weight.
Extra black weight.
Extra black weight.
The
Any xascent.
No fit for xascent.
Very low xascent.
Low xascent.
Medium xascent.
High xascent.
Very high xascent.
The
Any xheight.
No fit xheight.
Constant small xheight.
Constant standard xheight.
Constant large xheight.
Ducking small xheight.
Ducking standard xheight.
Ducking large xheight.
Constant standard xheight.
Ducking standard xheight.
Specifies the alignment of paragraph text along the flow direction axis, relative to the top and bottom of the flow's layout box.
The top of the text flow is aligned to the top edge of the layout box.
The bottom of the text flow is aligned to the bottom edge of the layout box.
The center of the flow is aligned to the center of the layout box.
Represents the internal structure of a device pixel (that is, the physical arrangement of red, green, and blue color components) that is assumed for purposes of rendering text.
The red, green, and blue color components of each pixel are assumed to occupy the same point.
Each pixel is composed of three vertical stripes, with red on the left, green in the center, and blue on the right. This is the most common pixel geometry for LCD monitors.
Each pixel is composed of three vertical stripes, with blue on the left, green in the center, and red on the right.
Specifies the direction in which reading progresses.
Indicates that reading progresses from left to right.
Indicates that reading progresses from right to left.
Indicates that reading progresses from top to bottom.
Indicates that reading progresses from bottom to top.
Represents a method of rendering glyphs.
Represents a method of rendering glyphs.
Indicates additional shaping requirements for text.
Indicates that there are no additional shaping requirements for text. Text is shaped with the writing system default behavior.
Indicates that text should leave no visible control or format control characters.
Specifies the alignment of paragraph text along the reading direction axis, relative to the leading and trailing edge of the layout box.
The leading edge of the paragraph text is aligned to the leading edge of the layout box.
The trailing edge of the paragraph text is aligned to the trailing edge of the layout box.
The center of the paragraph text is aligned to the center of the layout box.
Align text to the leading side, and also justify text to fill the lines.
The
ClearType antialiasing computes coverage independently for the red, green, and blue color elements of each pixel. This allows for more detail than conventional antialiasing. However, because there is no one alpha value for each pixel, ClearType is not suitable for rendering text onto a transparent intermediate bitmap.
Grayscale antialiasing computes one coverage value for each pixel. Because the alpha value of each pixel is well-defined, text can be rendered onto a transparent bitmap, which can then be composited with other content.
Note: Grayscale rendering with
Identifies a type of alpha texture.
An alpha texture is a bitmap of alpha values, each representing the opacity of a pixel or subpixel.
Specifies an alpha texture for aliased text rendering (that is, each pixel is either fully opaque or fully transparent), with one byte per pixel.
Specifies an alpha texture for ClearType text rendering, with three bytes per pixel in the horizontal dimension and one byte per pixel in the vertical dimension.
Specifies the text granularity used to trim text overflowing the layout box.
No trimming occurs. Text flows beyond the layout width.
Trimming occurs at a character cluster boundary.
Trimming occurs at a word boundary.
The
The client specifies a
The default glyph orientation. In vertical layout, naturally horizontal scripts (Latin, Thai, Arabic, Devanagari) rotate 90 degrees clockwise, while ideographic scripts (Chinese, Japanese, Korean) remain upright, 0 degrees.
Stacked glyph orientation. Ideographic scripts and scripts that permit stacking (Latin, Hebrew) are stacked in vertical reading layout. Connected scripts (Arabic, Syriac, 'Phags-pa, Ogham), which would otherwise look broken if glyphs were kept at 0 degrees, remain connected and rotate.
Specifies the word wrapping to be used in a particular multiline paragraph.
Indicates that words are broken across lines to avoid text overflowing the layout box.
Indicates that words are kept within the same line even when it overflows the layout box. This option is often used with scrolling to reveal overflow text.
Words are broken across lines to avoid text overflowing the layout box. Emergency wrapping occurs if the word is larger than the maximum width.
Only wrap whole words, never breaking words (emergency wrapping) when the layout width is too small for even a single word.
Wrap between any valid character clusters.
Creates a DirectWrite factory object that is used for subsequent creation of individual DirectWrite objects.
A value that specifies whether the factory object will be shared or isolated.
A
An address of a reference to the newly created DirectWrite factory object.
If this function succeeds, it returns
This function creates a DirectWrite factory object that is used for subsequent creation of individual DirectWrite objects. DirectWrite factory contains internal state data such as font loader registration and cached font data. In most cases it is recommended you use the shared factory object, because it allows multiple components that use DirectWrite to share internal DirectWrite state data, and thereby reduce memory usage. However, there are cases when it is desirable to reduce the impact of a component, such as a plug-in from an untrusted source, on the rest of the process, by sandboxing and isolating it from the rest of the process components. In such cases, it is recommended you use an isolated factory for the sandboxed component.
The following example shows how to create a shared DirectWrite factory.
if (SUCCEEDED(hr))
{
    hr = DWriteCreateFactory(
        DWRITE_FACTORY_TYPE_SHARED,
        __uuidof(IDWriteFactory),
        reinterpret_cast<IUnknown**>(&pDWriteFactory_));
}
Represents an absolute reference to a font face which contains font face type, appropriate file references, face identification data and various font data such as metrics, names and glyph outlines.
Encapsulates a 32-bit device independent bitmap and device context, which you can use for rendering glyphs.
Gets or sets the current text antialiasing mode of the bitmap render target.
Gets the current text antialiasing mode of the bitmap render target.
Returns a
Sets the current text antialiasing mode of the bitmap render target.
A
Returns
The antialiasing mode of a newly-created bitmap render target defaults to
This interface allows the application to enumerate through the color glyph runs. The enumerator enumerates the layers in back-to-front order for appropriate layering.
-Returns the current glyph run of the enumerator.
-Move to the next glyph run in the enumerator.
-Returns TRUE if there is a next glyph run.
If this method succeeds, it returns
Returns the current glyph run of the enumerator.
-A reference to the current glyph run.
If this method succeeds, it returns
Enumerator for an ordered collection of color glyph runs.
-Gets the current color glyph run.
-Gets the current color glyph run.
-Receives a reference to the color glyph run. The reference remains valid until the next call to MoveNext or until the interface is released.
Standard
The root factory interface for all DirectWrite objects.
-Creates a font fallback object from the system font fallback list.
-Creates a font fallback object from the system font fallback list.
-Contains an address of a reference to the newly created font fallback object.
If this method succeeds, it returns
Creates a font fallback builder object.
A font fallback builder allows you to create Unicode font fallback mappings and create a font fallback object from those mappings.
-Contains an address of a reference to the newly created font fallback builder object.
If this method succeeds, it returns
This method is called on a glyph run to translate it into multiple color glyph runs.
-The horizontal baseline origin of the original glyph run.
The vertical baseline origin of the original glyph run.
Original glyph run containing monochrome glyph IDs.
Optional glyph run description.
Measuring mode used to compute glyph positions if the run contains color glyphs.
World transform multiplied by any DPI scaling. This is needed to compute glyph positions if the run contains color glyphs and the measuring mode is not
Zero-based index of the color palette to use. Valid indices are less than the number of palettes in the font, as returned by
If the original glyph run contains color glyphs, this parameter receives a reference to an
If this method succeeds, it returns
If the code calls this method with a glyph run that contains no color information, the method returns DWRITE_E_NOCOLOR to let the application know that it can just draw the original glyph run. If the glyph run contains color information, the function returns an object that can be enumerated through to expose runs and associated colors. The application then calls DrawGlyphRun with each of the returned glyph runs and foreground colors.
-Creates a rendering parameters object with the specified properties.
-The gamma value used for gamma correction, which must be greater than zero and cannot exceed 256.
The amount of contrast enhancement, zero or greater.
The amount of contrast enhancement to use for grayscale antialiasing, zero or greater.
The degree of ClearType level, from 0.0f (no ClearType) to 1.0f (full ClearType).
The geometry of a device pixel.
Method of rendering glyphs. In most cases, this should be
How to grid fit glyph outlines. In most cases, this should be DWRITE_GRID_FIT_DEFAULT to automatically choose an appropriate mode.
Holds the newly created rendering parameters object, or
If this method succeeds, it returns
Creates a glyph run analysis object, which encapsulates information used to render a glyph run.
-Structure specifying the properties of the glyph run.
Optional transform applied to the glyphs and their positions. This transform is applied after the scaling specified by the emSize and pixelsPerDip.
Specifies the rendering mode, which must be one of the raster rendering modes (i.e., not default and not outline).
Specifies the method to measure glyphs.
How to grid-fit glyph outlines. This must be non-default.
Specifies the antialias mode.
Horizontal position of the baseline origin, in DIPs.
Vertical position of the baseline origin, in DIPs.
Receives a reference to the newly created object.
If this method succeeds, it returns
Used to create all subsequent DirectWrite objects. This interface is the root factory interface for all DirectWrite objects.
- Create an
if (SUCCEEDED(hr))
{
    hr = DWriteCreateFactory(
        DWRITE_FACTORY_TYPE_SHARED,
        __uuidof(IDWriteFactory),
        reinterpret_cast<IUnknown**>(&pDWriteFactory_)
        );
}
An
Retrieves the list of system fonts.
-Gets the font download queue associated with this factory object.
-Creates a glyph-run-analysis object that encapsulates info that DirectWrite uses to render a glyph run.
-If this method succeeds, it returns
Creates a rendering parameters object with the specified properties.
-The gamma value used for gamma correction, which must be greater than zero and cannot exceed 256.
The amount of contrast enhancement, zero or greater.
The amount of contrast enhancement to use for grayscale antialiasing, zero or greater.
The degree of ClearType level, from 0.0f (no ClearType) to 1.0f (full ClearType).
A
A
A
A reference to a memory block that receives a reference to a
If this method succeeds, it returns
Creates a reference to a font given a full path.
-Absolute file path. Subsequent operations on the constructed object may fail if the user provided filePath doesn't correspond to a valid file on the disk.
The zero-based index of a font face in cases when the font files contain a collection of font faces. If the font files contain a single face, this value should be zero.
Font face simulation flags for algorithmic emboldening and italicization.
Contains newly created font face reference object, or nullptr in case of failure.
If this method succeeds, it returns
Creates a reference to a font given a full path.
-Absolute file path. Subsequent operations on the constructed object may fail if the user provided filePath doesn't correspond to a valid file on the disk.
Last modified time of the input file path. If this parameter is omitted, the function will access the font file to obtain its last write time, so clients are encouraged to specify this value to avoid extra disk access. Subsequent operations on the constructed object may fail if the user-provided lastWriteTime doesn't match the file on the disk.
The zero-based index of a font face in cases when the font files contain a collection of font faces. If the font files contain a single face, this value should be zero.
Font face simulation flags for algorithmic emboldening and italicization.
Contains newly created font face reference object, or nullptr in case of failure.
If this method succeeds, it returns
Retrieves the list of system fonts.
-Holds the newly created font set object, or
If this method succeeds, it returns
Creates an empty font set builder to add font face references and create a custom font set.
-Holds the newly created font set builder object, or
If this method succeeds, it returns
Create a weight/width/slope tree from a set of fonts.
-A set of fonts to use to build the collection.
Holds the newly created font collection object, or
If this method succeeds, it returns
Retrieves a weight/width/slope tree of system fonts.
-If this parameter is TRUE, the function performs an immediate check for changes to the set of system fonts. If this parameter is
Holds the newly created font collection object, or
If this parameter is TRUE, the function performs an immediate check for changes to the set of system fonts. If this parameter is
If this method succeeds, it returns
Gets the font download queue associated with this factory object.
-Receives a reference to the font download queue interface.
If this method succeeds, it returns
The root factory interface for all DirectWrite objects.
-Translates a glyph run to a sequence of color glyph runs, which can be rendered to produce a color representation of the original "base" run.
-Horizontal and vertical origin of the base glyph run in pre-transform coordinates.
Pointer to the original "base" glyph run.
Optional glyph run description.
Which data formats the runs should be split into.
Measuring mode, needed to compute the origins of each glyph.
Matrix converting from the client's coordinate space to device coordinates (pixels), i.e., the world transform multiplied by any DPI scaling.
Zero-based index of the color palette to use. Valid indices are less than the number of palettes in the font, as returned by
If the function succeeds, receives a reference to an enumerator object that can be used to obtain the color glyph runs. If the base run has no color glyphs, then the output reference is
Returns DWRITE_E_NOCOLOR if the font has no color information, the glyph run does not contain any color glyphs, or the specified color palette index is out of range. In this case, the client should render the original glyph run. Otherwise, returns a standard
Calling
Converts glyph run placements to glyph origins.
-Structure containing the properties of the glyph run.
The position of the baseline origin, in DIPs, relative to the upper-left corner of the DIB.
On return contains the glyph origins for the glyphrun.
If this method succeeds, it returns
The transform and DPI have no effect on the origin scaling. They are solely used to compute glyph advances when not supplied and align glyphs in pixel aligned measuring modes.
-Converts glyph run placements to glyph origins.
-Structure containing the properties of the glyph run.
The measuring method for glyphs in the run, used with the other properties to determine the rendering mode.
The position of the baseline origin, in DIPs, relative to the upper-left corner of the DIB.
World transform multiplied by any DPI scaling. This is needed to compute glyph positions if the run contains color glyphs and the measuring mode is not
On return contains the glyph origins for the glyphrun.
If this method succeeds, it returns
The transform and DPI have no effect on the origin scaling. They are solely used to compute glyph advances when not supplied and align glyphs in pixel aligned measuring modes.
-Used to create all subsequent DirectWrite objects. This interface is the root factory interface for all DirectWrite objects.
- Create an
if (SUCCEEDED(hr))
{
    hr = DWriteCreateFactory(
        DWRITE_FACTORY_TYPE_SHARED,
        __uuidof(IDWriteFactory),
        reinterpret_cast<IUnknown**>(&pDWriteFactory_)
        );
}
An
This topic describes various ways in which you can use custom fonts in your app.
Represents a physical font in a font collection. This interface is used to create font faces from physical fonts, or to retrieve information such as font face metrics or face names from existing font faces.
-Gets the font family to which the specified font belongs.
-Gets the weight, or stroke thickness, of the specified font.
-Gets the stretch, or width, of the specified font.
-Gets the style, or slope, of the specified font.
-Determines whether the font is a symbol font.
-Gets a localized strings collection containing the face names for the font (such as Regular or Bold), indexed by locale name.
-Gets a value that indicates what simulations are applied to the specified font.
-Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
-Gets the font family to which the specified font belongs.
-When this method returns, contains an address of a reference to the font family object to which the specified font belongs.
If this method succeeds, it returns
Gets the weight, or stroke thickness, of the specified font.
-A value that indicates the weight for the specified font.
Gets the stretch, or width, of the specified font.
-A value that indicates the type of stretch, or width, applied to the specified font.
Gets the style, or slope, of the specified font.
-A value that indicates the type of style, or slope, of the specified font.
Determines whether the font is a symbol font.
-TRUE if the font is a symbol font; otherwise,
Gets a localized strings collection containing the face names for the font (such as Regular or Bold), indexed by locale name.
-When this method returns, contains an address to a reference to the newly created localized strings object.
If this method succeeds, it returns
Gets a localized strings collection containing the specified informational strings, indexed by locale name.
-A value that identifies the informational string to get. For example,
When this method returns, contains an address of a reference to the newly created localized strings object.
When this method returns, TRUE if the font contains the specified string ID; otherwise,
If the font does not contain the string specified by informationalStringID, the return value is
Gets a value that indicates what simulations are applied to the specified font.
-A value that indicates one or more of the types of simulations (none, bold, or oblique) applied to the specified font.
Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
-When this method returns, contains a structure that has font metrics for the current font face. The metrics returned by this function are in font design units.
Determines whether the font supports a specified character.
-A Unicode (UCS-4) character value for the method to inspect.
When this method returns, TRUE if the font supports the specified character; otherwise,
Creates a font face object for the font.
-When this method returns, contains an address of a reference to the newly created font face object.
If this method succeeds, it returns
Represents a physical font in a font collection.
-Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
-Gets the PANOSE values from the font and is used for font selection and matching.
-If the font has no PANOSE values, they are set to 'any' (0) and DirectWrite doesn't simulate those values.
-Determines if the font is monospaced, that is, the characters are the same fixed-pitch width (non-proportional).
-Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
- A filled
Gets the PANOSE values from the font and is used for font selection and matching.
-A reference to the
If the font has no PANOSE values, they are set to 'any' (0) and DirectWrite doesn't simulate those values.
-Retrieves the list of character ranges supported by a font.
-The maximum number of character ranges passed in from the client.
An array of
A reference to the actual number of character ranges, regardless of the maximum count.
This method can return one of these values.
Return value | Description
---|---
S_OK | The method executed successfully.
E_NOT_SUFFICIENT_BUFFER | The buffer is too small. The actualRangeCount was more than the maxRangeCount.
The list of character ranges supported by a font is useful for scenarios like character picking, glyph display, and efficient font selection lookup. GetUnicodeRanges is similar to GDI's GetFontUnicodeRanges, except that it returns the full Unicode range, not just 16-bit UCS-2.
These ranges are from the cmap, not the OS/2::ulCodePageRange1.
If this method is unavailable, you can use the
The
Determines if the font is monospaced, that is, the characters are the same fixed-pitch width (non-proportional).
-Returns TRUE if the font is monospaced; otherwise, FALSE.
Represents a physical font in a font collection.
This interface adds the ability to check if a color rendering path is potentially necessary.
-Enables determining if a color rendering path is potentially necessary.
-Enables determining if a color rendering path is potentially necessary.
-Returns TRUE if the font has color information (COLR and CPAL tables); otherwise
Represents a font in a font collection.
-Gets a font face reference that identifies this font.
-Gets the current locality of the font.
-For fully local files, the result will always be
Creates a font face object for the font.
-A reference to a memory block that receives a reference to a
If this method succeeds, it returns
This method returns DWRITE_E_REMOTEFONT if it could not construct a remote font.
Compares two instances of font references for equality.
-A reference to a
Returns whether the two instances of font references are equal. Returns TRUE if the two instances are equal; otherwise,
Gets a font face reference that identifies this font.
-A reference to a memory block that receives a reference to a
If this method succeeds, it returns
Gets the current locality of the font.
-Returns the current locality of the font.
For fully local files, the result will always be
An object that encapsulates a set of fonts, such as the set of fonts installed on the system, or the set of fonts in a particular directory. The font collection API can be used to discover what font families and fonts are available, and to obtain some metadata about the fonts.
-Gets the underlying font set used by this collection.
-Gets the underlying font set used by this collection.
-Returns the font set used by the collection.
If this method succeeds, it returns
Application-defined callback interface that receives notifications from the font download queue (
The DownloadCompleted method is called back on an arbitrary thread when a download operation ends.
-Pointer to the download queue interface on which the BeginDownload method was called.
Optional context object that was passed to BeginDownload. AddRef is called on the context object by BeginDownload and Release is called after the DownloadCompleted method returns.
Result of the download operation.
Determines whether the download queue is empty. Note that the queue does not include requests that are already being downloaded. Calling BeginDownload clears the queue.
-Gets the current generation number of the download queue, which is incremented every time after a download completes, whether failed or successful. This cookie value can be compared against cached data to determine if it is stale.
-Registers a client-defined listener object that receives download notifications. All registered listener's DownloadCompleted will be called after BeginDownload completes.
-If this method succeeds, it returns
An
Unregisters a notification handler that was previously registered using AddListener.
-If this method succeeds, it returns
Determines whether the download queue is empty. Note that the queue does not include requests that are already being downloaded. Calling BeginDownload clears the queue.
-TRUE if the queue is empty,
Begins an asynchronous download operation. The download operation executes in the background until it completes or is cancelled by a CancelDownload call.
- Returns
BeginDownload removes all download requests from the queue, transferring them to a background download operation. If any previous downloads are still ongoing when BeginDownload is called again, the new download does not complete until the previous downloads have finished. If the queue is empty and no active downloads are pending, the DownloadCompleted callback is called immediately with DWRITE_DOWNLOAD_RESULT_NONE.
-Removes all download requests from the queue and cancels any active download operations.
-If this method succeeds, it returns
Gets the current generation number of the download queue, which is incremented every time after a download completes, whether failed or successful. This cookie value can be compared against cached data to determine if it is stale.
-The current generation number of the download queue.
Represents an absolute reference to a font face.
This interface contains the font face type, appropriate file references, and face identification data.
You obtain various font data like metrics, names, and glyph outlines from the
Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
-Gets caret metrics for the font in design units.
-Caret metrics are used by text editors for drawing the correct caret placement and slant.
-Determines whether the font of a text range is monospaced, that is, the font characters are the same fixed-pitch width.
-Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
-A filled
Obtains design units and common metrics for the font face. These metrics are applicable to all the glyphs within a font face and are used by applications for layout calculations.
-The logical size of the font in DIP units.
The number of physical pixels per DIP.
An optional transform applied to the glyphs and their positions. This transform is applied after the scaling specified by the font size and pixelsPerDip.
A reference to a
Standard
Gets caret metrics for the font in design units.
-A reference to the
Caret metrics are used by text editors for drawing the correct caret placement and slant.
-Retrieves a list of character ranges supported by a font.
-Maximum number of character ranges passed in from the client.
An array of
A reference to the actual number of character ranges, regardless of the maximum count.
This method can return one of these values.
Return value | Description
---|---
S_OK | The method executed successfully.
E_NOT_SUFFICIENT_BUFFER | The buffer is too small. The actualRangeCount was more than the maxRangeCount.
A list of character ranges supported by the font is useful for scenarios like character picking, glyph display, and efficient font selection lookup. This is similar to GDI's GetFontUnicodeRanges, except that it returns the full Unicode range, not just 16-bit UCS-2.
These ranges are from the cmap, not the OS/2::ulCodePageRange1.
If this method is unavailable, you can use the
The
Determines whether the font of a text range is monospaced, that is, the font characters are the same fixed-pitch width.
-Returns TRUE if the font is monospaced, otherwise it returns
Retrieves the advances in design units for a sequence of glyphs.
-The number of glyphs to retrieve advances for.
An array of glyph IDs to retrieve advances for.
The returned advances in font design units for each glyph.
Retrieve the glyphs' vertical advance heights rather than horizontal advance widths.
If this method succeeds, it returns
This is equivalent to calling GetGlyphMetrics and using only the advance width and height.
-Returns the pixel-aligned advances for a sequence of glyphs.
-Logical size of the font in DIP units. A DIP ("device-independent pixel") equals 1/96 inch.
Number of physical pixels per DIP. For example, if the DPI of the rendering surface is 96 this value is 1.0f. If the DPI is 120, this value is 120.0f/96.
Optional transform applied to the glyphs and their positions. This transform is applied after the scaling specified by the font size and pixelsPerDip.
When
Retrieve the glyph's vertical advances rather than horizontal advances.
Total glyphs to retrieve adjustments for.
An array of glyph IDs to retrieve advances for.
The returned advances in font design units for each glyph.
If this method succeeds, it returns
This is equivalent to calling GetGdiCompatibleGlyphMetrics and using only the advance width and height.
Like GetGdiCompatibleGlyphMetrics, these are in design units, meaning they must be scaled down by DWRITE_FONT_METRICS::designUnitsPerEm.
-Retrieves the kerning pair adjustments from the font's kern table.
-Number of glyphs to retrieve adjustments for.
An array of glyph IDs to retrieve adjustments for.
The advances, returned in font design units, for each glyph. The last glyph adjustment is zero.
If this method succeeds, it returns
GetKerningPairAdjustments isn't a direct replacement for GDI's character-based GetKerningPairs, but it serves the same role, without the client needing to cache them locally. GetKerningPairAdjustments also uses glyph IDs directly rather than UCS-2 characters (how the kern table actually stores them), which avoids glyph collapse and ambiguity, such as the dash and hyphen, or space and non-breaking space.
Newer fonts may have only GPOS kerning instead of the legacy pair-table kerning. Such fonts, like Gabriola, will only return 0's for adjustments. GetKerningPairAdjustments doesn't virtualize and flatten these GPOS entries into kerning pairs.
You can realize a performance benefit by calling
Determines whether the font supports pair-kerning.
-Returns TRUE if the font supports kerning pairs, otherwise
If the font doesn't support pair table kerning, you don't need to call
Determines the recommended rendering mode for the font, using the specified size and rendering parameters.
-The logical size of the font in DIP units. A DIP ("device-independent pixel") equals 1/96 inch.
The number of physical pixels per DIP in a horizontal position. For example, if the DPI of the rendering surface is 96, this value is 1.0f. If the DPI is 120, this value is 120.0f/96.
The number of physical pixels per DIP in a vertical position. For example, if the DPI of the rendering surface is 96, this value is 1.0f. If the DPI is 120, this value is 120.0f/96.
Specifies the world transform.
Whether the glyphs in the run are sideways or not.
A
The measuring method that will be used for glyphs in the font. Renderer implementations may choose different rendering modes for different measuring methods, for example:
When this method returns, contains a value that indicates the recommended rendering mode to use.
If this method succeeds, it returns
This method should be used to determine the actual rendering mode in cases where the rendering mode of the rendering params object is
Retrieves the vertical forms of the nominal glyphs retrieved from GetGlyphIndices.
-The number of glyphs to retrieve.
Original glyph indices from cmap.
The vertical form of glyph indices.
If this method succeeds, it returns
The retrieval uses the font's 'vert' table. This is used in CJK vertical layout so the correct characters are shown.
Call GetGlyphIndices to get the nominal glyph indices, then call this method to remap them to the substituted forms when the run is sideways and the font has vertical glyph variants. See HasVerticalGlyphVariants for more info.
-Determines whether the font has any vertical glyph variants.
-Returns TRUE if the font contains vertical glyph variants, otherwise
For OpenType fonts, HasVerticalGlyphVariants returns TRUE if the font contains a "vert" feature.
Represents an absolute reference to a font face.
This interface contains the font face type, appropriate file references, and face identification data.
You obtain various font data like metrics, names, and glyph outlines from the
This interface adds the ability to check if a color rendering path is potentially necessary.
-Allows you to determine if a color rendering path is potentially necessary.
-Gets the number of color palettes defined by the font.
-Get the number of entries in each color palette.
-Allows you to determine if a color rendering path is potentially necessary.
-Returns TRUE if a color rendering path is potentially necessary.
Gets the number of color palettes defined by the font.
-The return value is zero if the font has no color information. Color fonts are required to define at least one palette, with palette index zero reserved as the default palette.
Get the number of entries in each color palette.
-The number of entries in each color palette. All color palettes in a font have the same number of palette entries. The return value is zero if the font has no color information.
Gets color values from the font's color palette.
-Zero-based index of the color palette. If the font does not have a palette with the specified index, the method returns DWRITE_E_NOCOLOR.
Zero-based index of the first palette entry to read.
Number of palette entries to read.
Array that receives the color values.
This method can return one of these values.
Return value | Description
---|---
E_INVALIDARG | The sum of firstEntryIndex and entryCount is greater than the actual number of palette entries that's returned by the GetPaletteEntryCount method.
DWRITE_E_NOCOLOR | The font doesn't have a palette with the specified palette index.
Determines the recommended text rendering and grid-fit mode to be used based on the font, size, world transform, and measuring mode.
-Logical font size in DIPs.
Number of pixels per logical inch in the horizontal direction.
Number of pixels per logical inch in the vertical direction.
A
Specifies whether the font is sideways. TRUE if the font is sideways; otherwise,
A
A
A reference to a
A reference to a variable that receives a
A reference to a variable that receives a
If this method succeeds, it returns
Represents an absolute reference to a font face.
-Gets a font face reference that identifies this font.
-Gets the PANOSE values from the font, used for font selection and matching.
-This method doesn't simulate these values, such as substituting a weight or proportion inferred on other values. If the font doesn't specify them, they are all set to 'any' (0).
-Gets the weight of this font.
-Gets the stretch (also known as width) of this font.
-Gets the style (also known as slope) of this font.
-Creates a localized strings object that contains the family names for the font family, indexed by locale name.
-Creates a localized strings object that contains the face names for the font (for example, Regular or Bold), indexed by locale name.
-Gets a font face reference that identifies this font.
-A reference to a memory block that receives a reference to a
If this method succeeds, it returns
Gets the PANOSE values from the font, used for font selection and matching.
-A reference to a
This method doesn't simulate these values, such as substituting a weight or proportion inferred on other values. If the font doesn't specify them, they are all set to 'any' (0).
-Gets the weight of this font.
-Returns a
Gets the stretch (also known as width) of this font.
-Returns a
Gets the style (also known as slope) of this font.
-Returns a
Creates a localized strings object that contains the family names for the font family, indexed by locale name.
-A reference to a memory block that receives a reference to a
If this method succeeds, it returns
Creates a localized strings object that contains the face names for the font (for example, Regular or Bold), indexed by locale name.
-A reference to a memory block that receives a reference to a
If this method succeeds, it returns
Gets a localized strings collection that contains the specified informational strings, indexed by locale name.
-A
A reference to a memory block that receives a reference to a
A reference to a variable that receives whether the font contains the specified string ID. TRUE if the font contains the specified string ID; otherwise,
If the font doesn't contain the specified string, the return value is
Determines whether the font supports the specified character.
-A Unicode (UCS-4) character value.
Returns whether the font supports the specified character. Returns TRUE if the font has the specified character; otherwise,
Determines the recommended text rendering and grid-fit mode to be used based on the font, size, world transform, and measuring mode.
-Logical font size in DIPs.
Number of pixels per logical inch in the horizontal direction.
Number of pixels per logical inch in the vertical direction.
A
Specifies whether the font is sideways. TRUE if the font is sideways; otherwise,
A
A
A reference to a
A reference to a variable that receives a
A reference to a variable that receives a
If this method succeeds, it returns
Determines whether the character is locally downloaded from the font.
-A Unicode (UCS-4) character value.
Returns TRUE if the font has the specified character locally available,
Determines whether the glyph is locally downloaded from the font.
-Glyph identifier.
Returns TRUE if the font has the specified glyph locally available.
Determines whether the specified characters are local.
-Array of characters.
The number of elements in the character array.
Specifies whether to enqueue a download request if any of the specified characters are not local.
Receives TRUE if all of the specified characters are local, FALSE if any of the specified characters are remote.
If this method succeeds, it returns
Determines whether the specified glyphs are local.
-Array of glyph indices.
The number of elements in the glyph index array.
Specifies whether to enqueue a download request if any of the specified glyphs are not local.
Receives TRUE if all of the specified glyphs are local, FALSE if any of the specified glyphs are remote.
If this method succeeds, it returns
Represents an absolute reference to a font face. It contains font face type, appropriate file references and face identification data. Various font data such as metrics, names and glyph outlines are obtained from
Gets the available image formats of a specific glyph and ppem.
-Glyphs often have at least TrueType or CFF outlines, but they may also have SVG outlines, or they may have only bitmaps with no TrueType/CFF outlines. Some image formats, notably the PNG/JPEG ones, are size specific and will return no match when there isn't an entry in that size range.
Glyph ids beyond the glyph count return
Gets the available image formats of a specific glyph and ppem.
-The ID of the glyph.
Specifies which formats are supported in the font.
If this method succeeds, it returns
Glyphs often have at least TrueType or CFF outlines, but they may also have SVG outlines, or they may have only bitmaps with no TrueType/CFF outlines. Some image formats, notably the PNG/JPEG ones, are size specific and will return no match when there isn't an entry in that size range.
Glyph ids beyond the glyph count return
Gets the available image formats of a specific glyph and ppem.
-If this method succeeds, it returns
Glyphs often have at least TrueType or CFF outlines, but they may also have SVG outlines, or they may have only bitmaps with no TrueType/CFF outlines. Some image formats, notably the PNG/JPEG ones, are size specific and will return no match when there isn't an entry in that size range.
Glyph ids beyond the glyph count return
Gets a reference to the glyph data based on the desired image format.
-The ID of the glyph to retrieve image data for.
Requested pixels per em.
Specifies which formats are supported in the font.
On return contains data for a glyph.
If this method succeeds, it returns
The glyphDataContext must be released via ReleaseGlyphImageData when done if the data is not empty, similar to
The DWRITE_GLYPH_IMAGE_DATA::uniqueDataId is valuable for caching purposes so that if the same resource is returned more than once, an existing resource can be quickly retrieved rather than needing to reparse or decompress the data.
The function only returns SVG or raster data; requesting TrueType/CFF/COLR data returns DWRITE_E_INVALIDARG. Those formats must be drawn via DrawGlyphRun or queried using GetGlyphOutline instead. Exactly one format may be requested, or else the function returns DWRITE_E_INVALIDARG. If the glyph does not have that format, the call is not an error, but the function returns empty data.
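The "exactly one format" rule above can be sketched outside of COM as a simple flag check. The names and return values below are illustrative stand-ins, not the real DirectWrite signatures:

```cpp
#include <cstdint>

// Illustrative stand-ins for the real HRESULT codes.
constexpr int S_OK_SKETCH = 0;
constexpr int E_INVALIDARG_SKETCH = -1;

// A request is valid only if exactly one format bit is set; requesting
// zero or several formats is rejected, mirroring how GetGlyphImageData
// returns DWRITE_E_INVALIDARG for anything but a single format.
int ValidateFormatSelection(uint32_t requestedFormats)
{
    bool exactlyOneBit =
        requestedFormats != 0 &&
        (requestedFormats & (requestedFormats - 1)) == 0;
    return exactlyOneBit ? S_OK_SKETCH : E_INVALIDARG_SKETCH;
}
```

The single-bit test `x & (x - 1) == 0` clears the lowest set bit; if nothing remains, exactly one bit was set.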
-Releases the table data obtained from ReadGlyphData.
-Opaque context from ReadGlyphData.
Represents a reference to a font face. A uniquely identifying reference to a font, from which you can create a font face to query font metrics and use for rendering. A font face reference consists of a font file, font face index, and font face simulation. The file data may or may not be physically present on the local machine yet.
-Obtains the zero-based index of the font face in its font file or files. If the font files contain a single face, the return value is zero.
-Obtains the algorithmic style simulation flags of a font face.
-Obtains the font file representing a font face.
-Get the local size of the font face in bytes, which will always be less than or equal to GetFullSize. If the locality is remote, this value is zero. If full, this value will equal GetFileSize.
-Get the total size of the font face in bytes.
-Get the last modified date.
-Get the locality of this font face reference.
-You can always successfully create a font face from a fully local font. Attempting to create a font face on a remote or partially local font may fail with DWRITE_E_REMOTEFONT. This function may change between calls depending on background downloads and whether cached data expires.
-Creates a font face from the reference for use with layout, shaping, or rendering.
-Newly created font face object, or nullptr in the case of failure.
If this method succeeds, it returns
This function can fail with DWRITE_E_REMOTEFONT if the font is not local.
-Creates a font face with alternate font simulations, for example, to explicitly simulate a bold font face out of a regular variant.
-Font face simulation flags for algorithmic emboldening and italicization.
Newly created font face object, or nullptr in the case of failure.
If this method succeeds, it returns
This function can fail with DWRITE_E_REMOTEFONT if the font is not local.
-Obtains the zero-based index of the font face in its font file or files. If the font files contain a single face, the return value is zero.
-The zero-based index of the font face in its font file or files. If the font files contain a single face, the return value is zero.
Obtains the algorithmic style simulation flags of a font face.
-Returns the algorithmic style simulation flags of a font face.
Obtains the font file representing a font face.
-If this method succeeds, it returns
Get the local size of the font face in bytes, which will always be less than or equal to GetFullSize. If the locality is remote, this value is zero. If full, this value will equal GetFileSize.
-The local size of the font face in bytes, which will always be less than or equal to GetFullSize. If the locality is remote, this value is zero. If full, this value will equal GetFileSize.
Get the total size of the font face in bytes.
-Returns the total size of the font face in bytes. If the locality is remote, this value is unknown and will be zero.
Get the last modified date.
-Returns the last modified date. The time may be zero if the font file loader does not expose file time.
If this method succeeds, it returns
Get the locality of this font face reference.
-Returns the locality of this font face reference.
You can always successfully create a font face from a fully local font. Attempting to create a font face on a remote or partially local font may fail with DWRITE_E_REMOTEFONT. This function may change between calls depending on background downloads and whether cached data expires.
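The create-then-download-then-retry flow this implies can be sketched with plain C++ stand-ins. None of the names below are the real DirectWrite API; they only model the control flow of handling DWRITE_E_REMOTEFONT:

```cpp
// Illustrative stand-ins for the HRESULT values involved.
constexpr long S_OK_SKETCH = 0;
constexpr long DWRITE_E_REMOTEFONT_SKETCH = -2; // not the real error code

// Models a font face reference whose data may not be local yet.
struct FontFaceRefSketch
{
    bool isLocal = false;

    long CreateFontFace()
    {
        // Creation fails until the font data is locally available.
        return isLocal ? S_OK_SKETCH : DWRITE_E_REMOTEFONT_SKETCH;
    }

    void EnqueueDownloadAndWait()
    {
        // In real code this would enqueue a request on the font download
        // queue and wait for completion; here we just flip the flag.
        isLocal = true;
    }
};

// Try to create the face; on a remote-font failure, download and retry.
long CreateFaceWithDownload(FontFaceRefSketch& ref)
{
    long hr = ref.CreateFontFace();
    if (hr == DWRITE_E_REMOTEFONT_SKETCH)
    {
        ref.EnqueueDownloadAndWait();
        hr = ref.CreateFontFace();
    }
    return hr;
}
```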
-Adds a request to the font download queue (
If this method succeeds, it returns
Adds a request to the font download queue (
If this method succeeds, it returns
Downloading a character involves downloading every glyph it depends on directly or indirectly, via font tables (cmap, GSUB, COLR, glyf).
-Adds a request to the font download queue (
If this method succeeds, it returns
Downloading a glyph involves downloading any other glyphs it depends on from the font tables (GSUB, COLR, glyf).
-Adds a request to the font download queue (
If this method succeeds, it returns
Allows you to create Unicode font fallback mappings and create a font fall back object from those mappings.
-Appends a single mapping to the list. Call this once for each additional mapping.
-Unicode ranges that apply to this mapping.
Number of Unicode ranges.
List of target family name strings.
Number of target family names.
Optional explicit font collection for this mapping.
Locale of the context.
Base family name to match against, if applicable.
Scale factor to multiply the result target font by.
If this method succeeds, it returns
Add all the mappings from an existing font fallback object.
-An existing font fallback object.
If this method succeeds, it returns
Creates the finalized fallback object from the mappings added.
-Contains an address of a reference to the created fallback list.
If this method succeeds, it returns
Represents a family of related fonts.
-A font family is a set of fonts that share the same family name, such as "Times New Roman", but that differ in features. These feature differences include style, such as italic, and weight, such as bold. The following illustration shows examples of fonts that are members of the "Times New Roman" font family.
An
IDWriteFontFamily* pFontFamily = NULL;

// Get the font family.
if (SUCCEEDED(hr))
{
    hr = pFontCollection->GetFontFamily(i, &pFontFamily);
}
The font family name is used to specify the font family for text layout and text format objects. You can get a list of localized font family names from an
-IDWriteLocalizedStrings* pFamilyNames = NULL;

// Get a list of localized strings for the family name.
if (SUCCEEDED(hr))
{
    hr = pFontFamily->GetFamilyNames(&pFamilyNames);
}
Creates a localized strings object that contains the family names for the font family, indexed by locale name.
- The following code example shows how to get the font family name from a
-IDWriteLocalizedStrings* pFamilyNames = NULL;

// Get a list of localized strings for the family name.
if (SUCCEEDED(hr))
{
    hr = pFontFamily->GetFamilyNames(&pFamilyNames);
}

UINT32 index = 0;
BOOL exists = false;
wchar_t localeName[LOCALE_NAME_MAX_LENGTH];

if (SUCCEEDED(hr))
{
    // Get the default locale for this user.
    int defaultLocaleSuccess = GetUserDefaultLocaleName(localeName, LOCALE_NAME_MAX_LENGTH);

    // If the default locale is returned, find that locale name, otherwise use "en-us".
    if (defaultLocaleSuccess)
    {
        hr = pFamilyNames->FindLocaleName(localeName, &index, &exists);
    }
    if (SUCCEEDED(hr) && !exists) // If the above find did not find a match, retry with US English.
    {
        hr = pFamilyNames->FindLocaleName(L"en-us", &index, &exists);
    }
}

// If the specified locale doesn't exist, select the first on the list.
if (!exists)
    index = 0;

UINT32 length = 0;

// Get the string length.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetStringLength(index, &length);
}

// Allocate a string big enough to hold the name.
wchar_t* name = new (std::nothrow) wchar_t[length + 1];
if (name == NULL)
{
    hr = E_OUTOFMEMORY;
}

// Get the family name.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetString(index, name, length + 1);
}
Creates a localized strings object that contains the family names for the font family, indexed by locale name.
-The address of a reference to the newly created
If this method succeeds, it returns
The following code example shows how to get the font family name from a
-IDWriteLocalizedStrings* pFamilyNames = NULL;

// Get a list of localized strings for the family name.
if (SUCCEEDED(hr))
{
    hr = pFontFamily->GetFamilyNames(&pFamilyNames);
}

UINT32 index = 0;
BOOL exists = false;
wchar_t localeName[LOCALE_NAME_MAX_LENGTH];

if (SUCCEEDED(hr))
{
    // Get the default locale for this user.
    int defaultLocaleSuccess = GetUserDefaultLocaleName(localeName, LOCALE_NAME_MAX_LENGTH);

    // If the default locale is returned, find that locale name, otherwise use "en-us".
    if (defaultLocaleSuccess)
    {
        hr = pFamilyNames->FindLocaleName(localeName, &index, &exists);
    }
    if (SUCCEEDED(hr) && !exists) // If the above find did not find a match, retry with US English.
    {
        hr = pFamilyNames->FindLocaleName(L"en-us", &index, &exists);
    }
}

// If the specified locale doesn't exist, select the first on the list.
if (!exists)
    index = 0;

UINT32 length = 0;

// Get the string length.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetStringLength(index, &length);
}

// Allocate a string big enough to hold the name.
wchar_t* name = new (std::nothrow) wchar_t[length + 1];
if (name == NULL)
{
    hr = E_OUTOFMEMORY;
}

// Get the family name.
if (SUCCEEDED(hr))
{
    hr = pFamilyNames->GetString(index, name, length + 1);
}
Gets the font that best matches the specified properties.
-A value that is used to match a requested font weight.
A value that is used to match a requested font stretch.
A value that is used to match a requested font style.
When this method returns, contains the address of a reference to the newly created
Gets a list of fonts in the font family ranked in order of how well they match the specified properties.
-A value that is used to match a requested font weight.
A value that is used to match a requested font stretch.
A value that is used to match a requested font style.
An address of a reference to the newly created
Represents a family of related fonts.
-Gets the current location of a font given its zero-based index.
-Zero-based index of the font in the font list.
Returns a
For fully local files, the result will always be
Gets a font given its zero-based index.
-Zero-based index of the font in the font list.
A reference to a memory block that receives a reference to a
If this method succeeds, it returns
Gets a font face reference given its zero-based index.
-Zero-based index of the font in the font list.
A reference to a memory block that receives a reference to a
If this method succeeds, it returns
Gets the number of fonts in the font list.
-Gets the font collection that contains the fonts in the font list.
-Gets the number of fonts in the font list.
-Gets the font collection that contains the fonts in the font list.
-When this method returns, contains the address of a reference to the current
If this method succeeds, it returns
Gets the number of fonts in the font list.
-The number of fonts in the font list.
Gets a font given its zero-based index.
-Zero-based index of the font in the font list.
When this method returns, contains the address of a reference to the newly created
Represents a list of fonts.
-Gets the current location of a font given its zero-based index.
-Zero-based index of the font in the font list.
Returns a
For fully local files, the result will always be
Gets a font given its zero-based index.
-Zero-based index of the font in the font list.
A reference to a memory block that receives a reference to a
If this method succeeds, it returns
This method returns DWRITE_E_REMOTEFONT if it could not construct a remote font.
Gets a font face reference given its zero-based index.
-Zero-based index of the font in the font list.
A reference to a memory block that receives a reference to a
If this method succeeds, it returns
Get the number of total fonts in the set.
-Get the number of total fonts in the set.
-Returns the number of total fonts in the set.
Gets a reference to the font at the specified index, which may be local or remote.
-Zero-based index of the font.
Receives a reference the font face reference object, or nullptr on failure.
If this method succeeds, it returns
Gets the index of the matching font face reference in the font set, with the same file, face index, and simulations.
-Font face object that specifies the physical font.
Receives the zero-based index of the matching font if the font was found, or UINT_MAX otherwise.
Receives TRUE if the font exists or FALSE otherwise.
If this method succeeds, it returns
Gets the index of the matching font face reference in the font set, with the same file, face index, and simulations.
-Font face object that specifies the physical font.
Receives the zero-based index of the matching font if the font was found, or UINT_MAX otherwise.
Receives TRUE if the font exists or FALSE otherwise.
If this method succeeds, it returns
Returns all unique property values in the set, which can be used for purposes such as displaying a family list or tag cloud. Values are returned in priority order according to the language list, such that if a font contains more than one localized name, the preferred one will be returned.
-Font property of interest.
Receives a reference to the newly created strings list.
If this method succeeds, it returns
Returns all unique property values in the set, which can be used for purposes such as displaying a family list or tag cloud. Values are returned in priority order according to the language list, such that if a font contains more than one localized name, the preferred one will be returned.
-Font property of interest.
List of semicolon-delimited language names in preferred order. When a particular string like font family has more than one localized name, the first match is returned. For example, suppose the font set includes the Meiryo family, which has both Japanese and English family names. The returned list of distinct family names would include either the Japanese name (if "ja-jp" was specified as a preferred locale) or the English name (in all other cases).
Receives a reference to the newly created strings list.
If this method succeeds, it returns
Returns all unique property values in the set, which can be used for purposes such as displaying a family list or tag cloud. Values are returned in priority order according to the language list, such that if a font contains more than one localized name, the preferred one will be returned.
-Font property of interest.
List of semicolon-delimited language names in preferred order. When a particular string like font family has more than one localized name, the first match is returned. For example, suppose the font set includes the Meiryo family, which has both Japanese and English family names. The returned list of distinct family names would include either the Japanese name (if "ja-jp" was specified as a preferred locale) or the English name (in all other cases).
Receives a reference to the newly created strings list.
If this method succeeds, it returns
Returns how many times a given property value occurs in the set.
-Font property of interest.
Receives how many times the property occurs.
If this method succeeds, it returns
Returns a subset of fonts filtered by the given properties.
-List of properties to filter using.
The number of properties to filter.
The subset of fonts that match the properties, or nullptr on failure.
If this method succeeds, it returns
If no fonts matched the filter, the subset will be empty (GetFontCount returns 0), but the function does not return an error. The subset will always be equal to or less than the original set. If you only want to filter out remote fonts, you may pass null in properties and zero in propertyCount.
-Returns a subset of fonts filtered by the given properties.
-List of properties to filter using.
The number of properties to filter.
The subset of fonts that match the properties, or nullptr on failure.
If this method succeeds, it returns
If no fonts matched the filter, the subset will be empty (GetFontCount returns 0), but the function does not return an error. The subset will always be equal to or less than the original set. If you only want to filter out remote fonts, you may pass null in properties and zero in propertyCount.
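The empty-subset-is-not-an-error behavior can be modeled with an ordinary filter over a vector. The names here are hypothetical stand-ins for the real interface, kept only to illustrate the design choice of returning an empty set rather than a failure code:

```cpp
#include <string>
#include <vector>

// A minimal stand-in for a font entry with one filterable property.
struct FontEntrySketch
{
    std::string familyName;
};

// Returns the subset whose family name matches; an empty result is a
// valid answer, not a failure, mirroring GetMatchingFonts. The subset
// is always equal in size to or smaller than the original set.
std::vector<FontEntrySketch> GetMatchingFontsSketch(
    const std::vector<FontEntrySketch>& set,
    const std::string& familyFilter)
{
    std::vector<FontEntrySketch> subset;
    for (const auto& f : set)
    {
        if (f.familyName == familyFilter)
            subset.push_back(f);
    }
    return subset; // may be empty; never "fails"
}
```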
-Contains methods for building a font set.
-Adds a reference to a font to the set being built. The caller supplies enough information to search on, avoiding the need to open the potentially non-local font. Any properties not supplied by the caller will be missing, and those properties will not be available as filters in GetMatchingFonts. GetPropertyValues for missing properties will return an empty string list. The properties passed should generally be consistent with the actual font contents, but they need not be. You could, for example, alias a font using a different name or unique identifier, or you could set custom tags not present in the actual font.
-Reference to the font.
List of properties to associate with the reference.
The number of properties defined.
If this method succeeds, it returns
Adds a reference to a font to the set being built. The caller supplies enough information to search on, avoiding the need to open the potentially non-local font. Any properties not supplied by the caller will be missing, and those properties will not be available as filters in GetMatchingFonts. GetPropertyValues for missing properties will return an empty string list. The properties passed should generally be consistent with the actual font contents, but they need not be. You could, for example, alias a font using a different name or unique identifier, or you could set custom tags not present in the actual font.
-Reference to the font.
If this method succeeds, it returns
Appends an existing font set to the one being built, allowing one to aggregate two sets or to essentially extend an existing one.
-Font set to append font face references from.
If this method succeeds, it returns
Creates a font set from all the font face references added so far with AddFontFaceReference.
-Contains the newly created font set object, or nullptr in case of failure.
If this method succeeds, it returns
Creating a font set takes less time if the references were added with metadata rather than needing to extract the metadata from the font file.
-Represents an absolute reference to a font face which contains font face type, appropriate file references, face identification data and various font data such as metrics, names and glyph outlines.
-Provides interoperability with GDI, such as methods to convert a font face to a
Creates a font object that matches the properties specified by the
Structure containing a GDI-compatible font description.
The font collection to search. If
Receives a newly created font object if successful, or
Reads the font signature from the given font face.
-Font face to read font signature from.
Font signature from the OS/2 table, ulUnicodeRange and ulCodePageRange.
Reads the font signature from the given font face.
-Font face to read font signature from.
Font signature from the OS/2 table, ulUnicodeRange and ulCodePageRange.
Gets a list of matching fonts based on the specified
Structure containing a GDI-compatible font description.
The font set to search.
Receives the filtered font set if successful.
If this method succeeds, it returns
Represents an absolute reference to a font face which contains font face type, appropriate file references, face identification data and various font data such as metrics, names and glyph outlines.
-Represents an absolute reference to a font face which contains font face type, appropriate file references, face identification data and various font data such as metrics, names and glyph outlines.
-Represents an absolute reference to a font face which contains font face type, appropriate file references, face identification data and various font data such as metrics, names and glyph outlines.
-Represents text rendering settings for glyph rasterization and filtering.
-Gets the amount of contrast enhancement to use for grayscale antialiasing.
-Gets the amount of contrast enhancement to use for grayscale antialiasing.
-The contrast enhancement value. Valid values are greater than or equal to zero.
Represents text rendering settings for glyph rasterization and filtering.
-Gets the grid fitting mode.
-Gets the grid fitting mode.
-Returns a
Represents text rendering settings for glyph rasterization and filtering.
-Gets the rendering mode.
-Gets the rendering mode.
-Returns a
Represents a collection of strings indexed by number. An
Gets the number of strings in the string list.
-Gets the number of strings in the string list.
-Returns the number of strings in the string list.
Gets the length in characters (not including the null terminator) of the locale name with the specified index.
-Zero-based index of the locale name.
Receives the length in characters, not including the null terminator.
If this method succeeds, it returns
Copies the locale name with the specified index to the specified array.
-Zero-based index of the locale name.
Character array that receives the locale name.
Size of the array in characters. The size must include space for the terminating null character.
If this method succeeds, it returns
Gets the length in characters (not including the null terminator) of the string with the specified index.
-Zero-based index of the string.
Receives the length in characters of the string, not including the null terminator.
If this method succeeds, it returns
Copies the string with the specified index to the specified array.
-Zero-based index of the string.
Character array that receives the string.
Size of the array in characters. The size must include space for the terminating null character.
If this method succeeds, it returns
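The GetStringLength/GetString pair above follows the usual two-call pattern: query the length, allocate length + 1 characters for the terminator, then copy. The sketch below models it with a plain class in place of the real COM interface (the names are stand-ins):

```cpp
#include <string>
#include <vector>

// A stand-in for the string-list interface, holding plain strings.
struct StringListSketch
{
    std::vector<std::wstring> strings;

    // Receives the length in characters, not including the terminator.
    bool GetStringLength(size_t index, size_t* length) const
    {
        if (index >= strings.size()) return false;
        *length = strings[index].size();
        return true;
    }

    // Copies into a caller-provided buffer; the size must include
    // space for the terminating null character.
    bool GetString(size_t index, wchar_t* buffer, size_t size) const
    {
        if (index >= strings.size()) return false;
        if (size < strings[index].size() + 1) return false;
        strings[index].copy(buffer, strings[index].size());
        buffer[strings[index].size()] = L'\0';
        return true;
    }
};

// The two-call pattern: get the length, allocate length + 1, copy.
std::wstring ReadString(const StringListSketch& list, size_t index)
{
    size_t length = 0;
    if (!list.GetStringLength(index, &length)) return L"";
    std::vector<wchar_t> buffer(length + 1);
    if (!list.GetString(index, buffer.data(), buffer.size())) return L"";
    return std::wstring(buffer.data());
}
```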
Analyzes various text properties for complex script processing.
-Returns a 2x3 transform matrix for the respective angle to draw the glyph run.
Extends
If this method succeeds, it returns
Returns a complete list of OpenType features available for a script or font. If a feature is partially supported, then this method indicates that it is supported.
-The font face to get features from.
The script analysis for the script or font to check.
The locale name to check.
The maximum number of tags to return.
The actual number of tags returned.
An array of OpenType font feature tags.
If this method succeeds, it returns
Checks if a typographic feature is available for a glyph or a set of glyphs.
-The font face to read glyph information from.
The script analysis for the script or font to check.
The locale name to check.
The font feature tag to check.
The number of glyphs to check.
An array of glyph indices to check.
An array of integers that indicate whether or not the font feature applies to each glyph specified.
If this method succeeds, it returns
Describes the font and paragraph properties used to format text, and it describes locale information. This interface has all the same methods as
Gets or sets the preferred orientation of glyphs when using a vertical reading direction.
-Gets or sets the wrapping mode of the last line.
-Gets or sets the optical margin alignment for the text format.
-Gets or sets the current fallback. If none was ever set since creating the layout, it will be nullptr.
-Sets the orientation of a text format.
-The orientation to apply to the text format.
If this method succeeds, it returns
Get the preferred orientation of glyphs when using a vertical reading direction.
-The preferred orientation of glyphs when using a vertical reading direction.
Sets the wrapping mode of the last line.
-If set to
The last line is wrapped by default.
If this method succeeds, it returns
Gets the wrapping mode of the last line.
-Returns
Sets the optical margin alignment for the text format.
By default, glyphs are aligned to the margin by the default origin and side-bearings of the glyph. If you specify DWRITE_OPTICAL_ALIGNMENT_USING_SIDE_BEARINGS, then the alignment uses the side bearings to offset the glyph from the aligned edge, ensuring the ink of the glyphs is aligned.
-The optical alignment to set.
If this method succeeds, it returns
Gets the optical margin alignment for the text format.
-The optical alignment.
Applies the custom font fallback onto the layout. If none is set, it uses the default system fallback list.
-The font fallback to apply to the layout.
If this method succeeds, it returns
Gets the current fallback. If none was ever set since creating the layout, it will be nullptr.
-Contains an address of a reference to the current font fallback object.
If this method succeeds, it returns
Describes the font and paragraph properties used to format text, and it describes locale information.
-Gets or sets the line spacing adjustment set for a multiline text paragraph.
-Set line spacing.
-How to manage space between lines.
If this method succeeds, it returns
Gets the line spacing adjustment set for a multiline text paragraph.
-A structure describing how the space between lines is managed for the paragraph.
If this method succeeds, it returns
Represents a block of text after it has been fully analyzed and formatted.
-Enables or disables pair-kerning on a given text range.
-The flag that indicates whether text is pair-kerned.
The text range to which the change applies.
If this method succeeds, it returns
Gets whether or not pair-kerning is enabled at given position.
-The current text position.
The flag that indicates whether text is pair-kerned.
The position range of the current format.
If this method succeeds, it returns
Sets the spacing between characters.
-The spacing before each character, in reading order.
The spacing after each character, in reading order.
The minimum advance of each character, to prevent characters from becoming too thin or zero-width. This must be zero or greater.
Text range to which this change applies.
If this method succeeds, it returns
Gets the spacing between characters.
-The current text position.
The spacing before each character, in reading order.
The spacing after each character, in reading order.
The minimum advance of each character, to prevent characters from becoming too thin or zero-width. This must be zero or greater.
The position range of the current format.
If this method succeeds, it returns
Represents a block of text after it has been fully analyzed and formatted.
-Retrieves overall metrics for the formatted string.
-Gets or sets the preferred orientation of glyphs when using a vertical reading direction.
-Gets or sets whether or not the last word on the last line is wrapped.
-Gets or sets how the glyphs align to the edges of the margin.
-Gets or sets the current font fallback object.
-Retrieves overall metrics for the formatted string.
-When this method returns, contains the measured distances of text and associated content after being formatted.
If this method succeeds, it returns
Set the preferred orientation of glyphs when using a vertical reading direction.
-Preferred glyph orientation.
If this method succeeds, it returns
Get the preferred orientation of glyphs when using a vertical reading direction.
-Set whether or not the last word on the last line is wrapped.
-Line wrapping option.
If this method succeeds, it returns
Get whether or not the last word on the last line is wrapped.
-Set how the glyphs align to the edges of the margin. Default behavior is to align glyphs using their default glyph metrics, which include side bearings.
-Optical alignment option.
If this method succeeds, it returns
Get how the glyphs align to the edges of the margin.
-Apply a custom font fallback onto layout. If none is specified, the layout uses the system fallback list.
- Custom font fallback created from
If this method succeeds, it returns
Get the current font fallback object.
-The current font fallback object.
If this method succeeds, it returns
Gets or sets line spacing information.
-Invalidates the layout, forcing layout to remeasure before calling the metrics or drawing functions. This is useful if the locality of a font changes, and layout should be redrawn, or if the size of a client implemented
If this method succeeds, it returns
Set line spacing.
-How to manage space between lines.
If this method succeeds, it returns
Gets line spacing information.
-How to manage space between lines.
If this method succeeds, it returns
Retrieves properties of each line.
-The array to fill with line information.
The maximum size of the lineMetrics array.
The actual size of the lineMetrics array that is needed.
If this method succeeds, it returns
If maxLineCount is not large enough, the method returns E_NOT_SUFFICIENT_BUFFER, which is equivalent to HRESULT_FROM_WIN32(ERROR_INSUFFICIENT_BUFFER), and *actualLineCount is set to the number of lines needed.
Represents a set of application-defined callbacks that perform rendering of text, inline objects, and decorations such as underlines.
-The
Vertical rise of the caret in font design units. Rise / Run yields the caret angle. Rise = 1 for perfectly upright fonts (non-italic).
Horizontal run of the caret in font design units. Rise / Run yields the caret angle. Run = 0 for perfectly upright fonts (non-italic).
Horizontal offset of the caret, in font design units, along the baseline for good appearance. Offset = 0 for perfectly upright fonts (non-italic).
Contains information about a glyph cluster.
-The total advance width of all glyphs in the cluster.
The number of text positions in the cluster.
Indicates whether a line can be broken right after the cluster.
Indicates whether the cluster corresponds to a whitespace character.
Indicates whether the cluster corresponds to a newline character.
Indicates whether the cluster corresponds to a soft hyphen character.
Indicates whether the cluster is read from right to left.
Reserved for future use.
Contains the information needed by renderers to draw glyph runs with glyph color information. All coordinates are in device independent pixels (DIPs).
-Glyph run to draw for this layer.
Pointer to the glyph run description for this layer. This may be
X coordinate of the baseline origin for the layer.
Y coordinate of the baseline origin for the layer.
Color value of the run; if all members are zero, the run should be drawn using the current brush.
Zero-based index into the font's color palette; if this is 0xFFFF, the run should be drawn using the current brush.
Represents a color glyph run. The
Glyph run to draw for this layer.
Pointer to the glyph run description for this layer. This may be
X coordinate of the baseline origin for the layer.
Y coordinate of the baseline origin for the layer.
Color value of the run; if all members are zero, the run should be drawn using the current brush.
Zero-based index into the font's color palette; if this is 0xFFFF, the run should be drawn using the current brush.
Type of glyph image format for this color run. Exactly one type will be set since TranslateColorGlyphRun has already broken down the run into separate parts.
Measuring mode to use for this glyph run.
The
The number of font design units per em unit. Font files use their own coordinate system of font design units. A font design unit is the smallest measurable unit in the em square, an imaginary square that is used to size and align glyphs. The concept of em square is used as a reference scale factor when defining font size and device transformation semantics. The size of one em square is also commonly used to compute the paragraph indentation value.
The ascent value of the font face in font design units. Ascent is the distance from the top of font character alignment box to the English baseline.
The descent value of the font face in font design units. Descent is the distance from the bottom of font character alignment box to the English baseline.
The line gap in font design units. Recommended additional white space to add between lines to improve legibility. The recommended line spacing (baseline-to-baseline distance) is the sum of ascent, descent, and lineGap. The line gap is usually positive or zero but can be negative, in which case the recommended line spacing is less than the height of the character alignment box.
The cap height value of the font face in font design units. Cap height is the distance from the English baseline to the top of a typical English capital. Capital "H" is often used as a reference character for the purpose of calculating the cap height value.
The x-height value of the font face in font design units. x-height is the distance from the English baseline to the top of lowercase letter "x", or a similar lowercase character.
The underline position value of the font face in font design units. Underline position is the position of underline relative to the English baseline. The value is usually made negative in order to place the underline below the baseline.
The suggested underline thickness value of the font face in font design units.
The strikethrough position value of the font face in font design units. Strikethrough position is the position of strikethrough relative to the English baseline. The value is usually made positive in order to place the strikethrough above the baseline.
The suggested strikethrough thickness value of the font face in font design units.
The
- The number of font design units per em unit. Font files use their own coordinate system of font design units. A font design unit is the smallest measurable unit in the em square, an imaginary square that is used to size and align glyphs. The concept of em square is used as a reference scale factor when defining font size and device transformation semantics. The size of one em square is also commonly used to compute the paragraph indentation value.
The ascent value of the font face in font design units. Ascent is the distance from the top of font character alignment box to the English baseline.
The descent value of the font face in font design units. Descent is the distance from the bottom of font character alignment box to the English baseline.
The line gap in font design units. Recommended additional white space to add between lines to improve legibility. The recommended line spacing (baseline-to-baseline distance) is the sum of ascent, descent, and lineGap. The line gap is usually positive or zero but can be negative, in which case the recommended line spacing is less than the height of the character alignment box.
The cap height value of the font face in font design units. Cap height is the distance from the English baseline to the top of a typical English capital. Capital "H" is often used as a reference character for the purpose of calculating the cap height value.
The x-height value of the font face in font design units. x-height is the distance from the English baseline to the top of lowercase letter "x", or a similar lowercase character.
The underline position value of the font face in font design units. Underline position is the position of underline relative to the English baseline. The value is usually made negative in order to place the underline below the baseline.
The suggested underline thickness value of the font face in font design units.
The strikethrough position value of the font face in font design units. Strikethrough position is the position of strikethrough relative to the English baseline. The value is usually made positive in order to place the strikethrough above the baseline.
The suggested strikethrough thickness value of the font face in font design units.
Left edge of accumulated bounding blackbox of all glyphs in the font.
Top edge of accumulated bounding blackbox of all glyphs in the font.
Right edge of accumulated bounding blackbox of all glyphs in the font.
Bottom edge of accumulated bounding blackbox of all glyphs in the font.
Horizontal position of the subscript relative to the baseline origin. This is typically negative (to the left) in italic and oblique fonts, and zero in regular fonts.
Vertical position of the subscript relative to the baseline. This is typically negative.
Horizontal size of the subscript em box in design units, used to scale the simulated subscript relative to the full em box size. This is the numerator of the scaling ratio where denominator is the design units per em. If this member is zero, the font does not specify a scale factor, and the client uses its own policy.
Vertical size of the subscript em box in design units, used to scale the simulated subscript relative to the full em box size. This is the numerator of the scaling ratio where denominator is the design units per em. If this member is zero, the font does not specify a scale factor, and the client uses its own policy.
Horizontal position of the superscript relative to the baseline origin. This is typically positive (to the right) in italic and oblique fonts, and zero in regular fonts.
Vertical position of the superscript relative to the baseline. This is typically positive.
Horizontal size of the superscript em box in design units, used to scale the simulated superscript relative to the full em box size. This is the numerator of the scaling ratio where denominator is the design units per em. If this member is zero, the font does not specify a scale factor, and the client should use its own policy.
Vertical size of the superscript em box in design units, used to scale the simulated superscript relative to the full em box size. This is the numerator of the scaling ratio where denominator is the design units per em. If this member is zero, the font does not specify a scale factor, and the client should use its own policy.
A Boolean value that indicates that the ascent, descent, and lineGap are based on newer 'typographic' values in the font, rather than legacy values.
Font property used for filtering font sets and building a font set with explicit properties.
-Specifies the requested font property, such as
Specifies the value, such as "Segoe UI".
Specifies the locale to use, such as "en-US". Simply leave this empty when used with the font set filtering functions, as they will find a match regardless of language. For passing to AddFontFaceReference, the localeName specifies the language of the property value.
Data for a single glyph from GetGlyphImageData.
-Pointer to the glyph data.
Size of glyph data in bytes.
Unique identifier for the glyph data. Clients may use this to cache a parsed/decompressed version and tell whether a repeated call to the same font returns the same data.
Pixels per em of the returned data. For non-scalable raster data (PNG/TIFF/JPG), this can be larger or smaller than requested from GetGlyphImageData when there isn't an exact match. For scaling intermediate sizes, use: desired pixels per em * font em size / actual pixels per em.
Size of image when the format is pixel data.
Left origin along the horizontal Roman baseline.
Right origin along the horizontal Roman baseline.
Top origin along the vertical central baseline.
Bottom origin along vertical central baseline.
Specifies the metrics of an individual glyph. The units depend on how the metrics are obtained.
-Specifies the X offset from the glyph origin to the left edge of the black box. The glyph origin is the current horizontal writing position. A negative value means the black box extends to the left of the origin (often true for lowercase italic 'f').
Specifies the X offset from the origin of the current glyph to the origin of the next glyph when writing horizontally.
Specifies the X offset from the right edge of the black box to the origin of the next glyph when writing horizontally. The value is negative when the right edge of the black box overhangs the layout box.
Specifies the vertical offset from the vertical origin to the top of the black box. Thus, a positive value adds whitespace whereas a negative value means the glyph overhangs the top of the layout box.
Specifies the Y offset from the vertical origin of the current glyph to the vertical origin of the next glyph when writing vertically. Note that the term "origin" by itself denotes the horizontal origin. The vertical origin is different. Its Y coordinate is specified by verticalOriginY value, and its X coordinate is half the advanceWidth to the right of the horizontal origin.
Specifies the vertical distance from the bottom edge of the black box to the advance height. This is positive when the bottom edge of the black box is within the layout box, or negative when the bottom edge of black box overhangs the layout box.
Specifies the Y coordinate of a glyph's vertical origin, in the font's design coordinate system. The y coordinate of a glyph's vertical origin is the sum of the glyph's top side bearing and the top (that is, yMax) of the glyph's bounding box.
The optional adjustment to a glyph's position.
-A glyph offset changes the position of a glyph without affecting the pen position. Offsets are in logical, pre-transform units.
-The offset in the advance direction of the run. A positive advance offset moves the glyph to the right (in pre-transform coordinates) if the run is left-to-right or to the left if the run is right-to-left.
The offset in the ascent direction, that is, the direction ascenders point. A positive ascender offset moves the glyph up (in pre-transform coordinates). A negative ascender offset moves the glyph down.
Describes the region obtained by a hit test.
-The first text position within the hit region.
The number of text positions within the hit region.
The x-coordinate of the upper-left corner of the hit region.
The y-coordinate of the upper-left corner of the hit region.
The width of the hit region.
The height of the hit region.
The BIDI level of the text positions within the hit region.
true if the hit region contains text; otherwise, false.
true if the text range is trimmed; otherwise, false.
Contains properties describing the geometric measurement of an application-defined inline object.
-The width of the inline object.
The height of the inline object.
The distance from the top of the object to the point where it is lined up with the adjacent text. If the baseline is at the bottom, then baseline simply equals height.
A Boolean flag that indicates whether the object is to be placed upright or alongside the text baseline for vertical text.
The
Minimum amount of expansion to apply to the side of the glyph. This might vary from zero to infinity, typically being zero except for kashida.
Maximum amount of expansion to apply to the side of the glyph. This might vary from zero to infinity: zero for fixed-size characters and connected scripts, and non-zero for discrete scripts and for cursive scripts at expansion points.
Maximum amount of compression to apply to the side of the glyph. This might vary from zero up to the glyph cluster size.
Priority of this expansion point. Larger priorities are applied later, while priority zero does nothing.
Priority of this compression point. Larger priorities are applied later, while priority zero does nothing.
Allow this expansion point to use up any remaining slack space even after all expansion priorities have been used up.
Allow this compression point to use up any remaining space even after all compression priorities have been used up.
Apply expansion and compression to the leading edge of the glyph. This bit is
Apply expansion and compression to the trailing edge of the glyph. This bit is
Reserved
Contains information about a formatted line of text.
-The number of text positions in the text line. This includes any trailing whitespace and newline characters.
The number of whitespace positions at the end of the text line. Newline sequences are considered whitespace.
The number of characters in the newline sequence at the end of the text line. If the count is zero, then the text line was either wrapped or it is the end of the text.
The height of the text line.
The distance from the top of the text line to its baseline.
The line is trimmed.
Contains information about a formatted line of text.
-The number of text positions in the text line. This includes any trailing whitespace and newline characters.
The number of whitespace positions at the end of the text line. Newline sequences are considered whitespace.
The number of characters in the newline sequence at the end of the text line. If the count is zero, then the text line was either wrapped or it is the end of the text.
The height of the text line.
The distance from the top of the text line to its baseline.
The line is trimmed.
White space before the content of the line. This is included in the line height and baseline distances. If the line is formatted horizontally either with a uniform line spacing or with proportional line spacing, this value represents the extra space above the content.
White space after the content of the line. This is included in the height of the line. If the line is formatted horizontally either with a uniform line spacing or with proportional line spacing, this value represents the extra space below the content.
Method used to determine line spacing.
Spacing between lines. The interpretation of this parameter depends upon the line spacing method, as follows:
Distance from top of line to baseline. The interpretation of this parameter depends upon the line spacing method, as follows:
Proportion of the entire leading distributed before the line. The allowed value is between 0 and 1.0. The remaining leading is distributed after the line. It is ignored for the default and uniform line spacing methods. The leading that is available to distribute before or after the line depends on the values of the height and baseline parameters.
Specify whether
Indicates how much any visible DIPs (device independent pixels) overshoot each side of the layout or inline objects.
Positive overhangs indicate that the visible area extends outside the layout box or inline object, while negative values mean there is whitespace inside. The returned values are unaffected by rendering transforms or pixel snapping. Additionally, they may not exactly match the final target's pixel bounds after applying grid fitting and hinting.
-The distance from the left-most visible DIP to its left-alignment edge.
The distance from the top-most visible DIP to its top alignment edge.
The distance from the right-most visible DIP to its right-alignment edge.
The distance from the bottom-most visible DIP to its lower-alignment edge.
The
Stores the association of text and its writing system script, as well as some display attributes.
-The zero-based index representation of writing system script.
A value that indicates additional shaping requirement of text.
The
The standardized four character code for the given script.
Note: These only include the general Unicode scripts, not any additional ISO 15924 scripts for bibliographic distinction.
The standardized numeric code, ranging from 0 to 999.
Number of characters to estimate look-ahead for complex scripts. Latin and all Kana are generally 1. Indic scripts are up to 15, and most others are 8.
Note: Combining marks and variation selectors can produce clusters that are longer than these look-aheads, so this estimate reflects typical language use; diacritics must be tested explicitly and separately.
Appropriate character to elongate the given script for justification. For example:
Restrict the caret to whole clusters, like Thai and Devanagari. Scripts such as Arabic by default allow navigation between clusters. Others like Thai always navigate across whole clusters.
The language uses dividers between words, such as spaces between Latin or the Ethiopic wordspace. Examples include Latin, Greek, Devanagari, and Ethiopic. Chinese, Korean, and Thai are excluded.
The characters are discrete units from each other. This includes both block scripts and clustered scripts. Examples include Latin, Greek, Cyrillic, Hebrew, Chinese, and Thai.
The language is a block script, expanding between characters. Examples include Chinese, Japanese, Korean, and Bopomofo.
The language is justified within glyph clusters, not just between glyph clusters, such as the character sequence of Thai Lu and Sara Am (U+0E26, U+0E33), which form a single cluster but still expand between them. Examples include Thai, Lao, and Khmer.
The script's clusters are connected to each other (such as the baseline-linked Devanagari), and no separation is added between characters.
Note: Cursively linked scripts like Arabic are also connected (but not all connected scripts are cursive). Examples include Devanagari, Arabic, Syriac, Bengali, Gurmukhi, and Ogham. Latin, Chinese, and Thaana are excluded.
The script is naturally cursive (Arabic and Syriac), meaning it uses other justification methods like kashida extension rather than inter-character spacing.
Note: Although other scripts like Latin and Japanese might actually support handwritten cursive forms, they are not considered cursive scripts. Examples include Arabic, Syriac, and Mongolian. Thaana, Devanagari, Latin, and Chinese are excluded.
Reserved
Shaping output properties for an output glyph.
-Indicates that the glyph is shaped alone.
Reserved for future use.
Contains information regarding the size and placement of strikethroughs. All coordinates are in device independent pixels (DIPs).
-A value that indicates the width of the strikethrough, measured parallel to the baseline.
A value that indicates the thickness of the strikethrough, measured perpendicular to the baseline.
A value that indicates the offset of the strikethrough from the baseline. A positive offset represents a position below the baseline and a negative offset is above. Typically, the offset will be negative.
Reading direction of the text associated with the strikethrough. This value is used to interpret whether the width value runs horizontally or vertically.
Flow direction of the text associated with the strikethrough. This value is used to interpret whether the thickness value advances top to bottom, left to right, or right to left.
An array of characters containing the locale of the text that the strikethrough is being drawn over.
The measuring mode can be useful to the renderer to determine how underlines are rendered, such as rounding the thickness to a whole pixel in GDI-compatible modes.
Contains the metrics associated with text after layout. All coordinates are in device independent pixels (DIPs).
-A value that indicates the left-most point of formatted text relative to the layout box, while excluding any glyph overhang.
A value that indicates the top-most point of formatted text relative to the layout box, while excluding any glyph overhang.
A value that indicates the width of the formatted text, while ignoring trailing whitespace at the end of each line.
The width of the formatted text, taking into account the trailing whitespace at the end of each line.
The height of the formatted text. The height of an empty string is set to the same value as that of the default font.
The initial width given to the layout. It can be either larger or smaller than the text content width, depending on whether the text was wrapped.
Initial height given to the layout. Depending on the length of the text, it may be larger or smaller than the text content height.
The maximum reordering count of any line of text, used to calculate the most number of hit-testing boxes needed. If the layout has no bidirectional text, or no text at all, the minimum level is 1.
Total number of lines.
Contains the metrics associated with text after layout. All coordinates are in device independent pixels (DIPs).
-A value that indicates the left-most point of formatted text relative to the layout box, while excluding any glyph overhang.
A value that indicates the top-most point of formatted text relative to the layout box, while excluding any glyph overhang.
A value that indicates the width of the formatted text, while ignoring trailing whitespace at the end of each line.
The width of the formatted text, taking into account the trailing whitespace at the end of each line.
The height of the formatted text. The height of an empty string is set to the same value as that of the default font.
The initial width given to the layout. It can be either larger or smaller than the text content width, depending on whether the text was wrapped.
Initial height given to the layout. Depending on the length of the text, it may be larger or smaller than the text content height.
The maximum reordering count of any line of text, used to calculate the most number of hit-testing boxes needed. If the layout has no bidirectional text, or no text at all, the minimum level is 1.
Total number of lines.
Specifies the trimming option for text overflowing the layout box.
-A value that specifies the text granularity used to trim text overflowing the layout box.
A character code used as the delimiter that signals the beginning of the portion of text to be preserved. Text starting from the Nth occurrence of the delimiter (where N equals delimiterCount), counting backwards from the end of the text block, will be preserved. For example, given text that is a path like c:\A\B\C\D\file.txt, a delimiter equal to '\', and a delimiterCount equal to 1, the file.txt portion of the text would be preserved. Specifying a delimiterCount of 2 would preserve D\file.txt.
The delimiter count, counting from the end of the text, to preserve text from.
Contains a set of typographic features to be applied during text shaping.
-A reference to a structure that specifies properties used to identify and execute typographic features in the font.
A value that indicates the number of features being applied to a font face.
Contains information about the width, thickness, offset, run height, reading direction, and flow direction of an underline.
-All coordinates are in device independent pixels (DIPs).
-A value that indicates the width of the underline, measured parallel to the baseline.
A value that indicates the thickness of the underline, measured perpendicular to the baseline.
A value that indicates the offset of the underline from the baseline. A positive offset represents a position below the baseline (away from the text) and a negative offset is above (toward the text).
A value that indicates the height of the tallest run where the underline is applied.
A value that indicates the reading direction of the text associated with the underline. This value is used to interpret whether the width value runs horizontally or vertically.
A value that indicates the flow direction of the text associated with the underline. This value is used to interpret whether the thickness value advances top to bottom, left to right, or right to left.
An array of characters which contains the locale of the text that the underline is being drawn under. For example, in vertical text, the underline belongs on the left for Chinese but on the right for Japanese.
The measuring mode can be useful to the renderer to determine how underlines are rendered, such as rounding the thickness to a whole pixel in GDI-compatible modes.
The
The first code point in the Unicode range.
The last code point in the Unicode range.
Specifies the identifiers of the metadata items in an 8BIM IPTC digest metadata block.
-[VT_LPSTR] A name that identifies the 8BIM block.
[VT_BLOB] The embedded IPTC digest value.
Specifies the identifiers of the metadata items in an 8BIM IPTC block.
-[VT_LPSTR] A name that identifies the 8BIM block.
[VT_UNKNOWN] The IPTC block embedded in this 8BIM IPTC block.
Specifies the identifiers of the metadata items in an 8BIMResolutionInfo block.
-[VT_LPSTR] A name that identifies the 8BIM block.
[VT_UI4] The horizontal resolution of the image.
[VT_UI2] The units that the horizontal resolution is specified in; a 1 indicates pixels per inch and a 2 indicates pixels per centimeter.
[VT_UI2] The units that the image width is specified in; a 1 indicates inches, a 2 indicates centimeters, a 3 indicates points, a 4 specifies picas, and a 5 specifies columns.
[VT_UI4] The vertical resolution of the image.
[VT_UI2] The units that the vertical resolution is specified in; a 1 indicates pixels per inch and a 2 indicates pixels per centimeter.
[VT_UI2] The units that the image height is specified in; a 1 indicates inches, a 2 indicates centimeters, a 3 indicates points, a 4 specifies picas, and a 5 specifies columns.
Specifies the desired alpha channel usage.
-Use alpha channel.
Use a pre-multiplied alpha channel.
Ignore alpha channel.
Specifies the desired cache usage.
-The CreateBitmap of the
Do not cache the bitmap.
Cache the bitmap when needed.
Cache the bitmap at initialization.
Specifies the capabilities of the decoder.
-Decoder recognizes the image was encoded with an encoder produced by the same vendor.
Decoder can decode all the images within an image container.
Decoder can decode some of the images within an image container.
Decoder can enumerate the metadata blocks within a container format.
Decoder can find and decode a thumbnail.
Specifies the type of dither algorithm to apply when converting between image formats.
-A solid color algorithm without dither.
A solid color algorithm without dither.
A 4x4 ordered dither algorithm.
An 8x8 ordered dither algorithm.
A 16x16 ordered dither algorithm.
A 4x4 spiral dither algorithm.
An 8x8 spiral dither algorithm.
A 4x4 dual spiral dither algorithm.
An 8x8 dual spiral dither algorithm.
An error diffusion algorithm.
Specifies the cache options available for an encoder.
-The encoder is cached in memory. This option is not supported.
The encoder is cached to a temporary file. This option is not supported.
The encoder is not cached.
Specifies the sampling or filtering mode to use when scaling an image.
-A nearest neighbor interpolation algorithm. Also known as nearest pixel or point interpolation.
The output pixel is assigned the value of the pixel that the point falls within. No other pixels are considered.
A bilinear interpolation algorithm.
The output pixel values are computed as a weighted average of the nearest four pixels in a 2x2 grid.
A bicubic interpolation algorithm.
Destination pixel values are computed as a weighted average of the nearest sixteen pixels in a 4x4 grid.
A Fant resampling algorithm.
Destination pixel values are computed as a weighted average of all the pixels that map to the new pixel.
A high quality bicubic interpolation algorithm. Destination pixel values are computed using a much denser sampling kernel than regular cubic. The kernel is resized in response to the scale factor, making it suitable for downscaling by factors greater than 2.
Note: This value is supported beginning with Windows 10.
Specifies access to an
Specifies the type of palette used for an indexed image format.
-An arbitrary custom palette provided by caller.
An optimal palette generated using a median-cut algorithm. Derived from the colors in an image.
A black and white palette.
A palette that has its 8-color on-off primaries and the 16 system colors added. With duplicates removed, 16 colors are available.
A palette that has 3 intensity levels of each primary: 27-color on-off primaries and the 16 system colors added. With duplicates removed, 35 colors are available.
A palette that has 4 intensity levels of each primary: 64-color on-off primaries and the 16 system colors added. With duplicates removed, 72 colors are available.
A palette that has 5 intensity levels of each primary: 125-color on-off primaries and the 16 system colors added. With duplicates removed, 133 colors are available.
A palette that has 6 intensity levels of each primary: 216-color on-off primaries and the 16 system colors added. With duplicates removed, 224 colors are available. This is the same as WICBitmapPaletteFixedHalftoneWeb.
A palette that has 6 intensity levels of each primary: 216-color on-off primaries and the 16 system colors added. With duplicates removed, 224 colors are available. This is the same as
A palette that has its 252-color on-off primaries and the 16 system colors added. With duplicates removed, 256 colors are available.
A palette that has its 256-color on-off primaries and the 16 system colors added. With duplicates removed, 256 colors are available.
A palette that has 4 shades of gray.
A palette that has 16 shades of gray.
A palette that has 256 shades of gray.
Specifies the flip and rotation transforms.
-A rotation of 0 degrees.
A clockwise rotation of 90 degrees.
A clockwise rotation of 180 degrees.
A clockwise rotation of 270 degrees.
A horizontal flip. Pixels are flipped around the vertical y-axis.
A vertical flip. Pixels are flipped around the horizontal x-axis.
Specifies the color context types.
-An uninitialized color context.
A color context that is a full ICC color profile.
A color context that is one of a number of set color spaces (sRGB, AdobeRGB) that are defined in the EXIF specification.
Specifies component enumeration options.
-Enumerate any components that are not disabled. Because this value is 0x0, it is always included with the other options.
Force a read of the registry before enumerating components.
Include disabled components in the enumeration. The set of disabled components is disjoint with the set of default enumerated components.
Include unsigned components in the enumeration. This option has no effect.
At the end of component enumeration, filter out any components that are not Windows provided.
Specifies the component signing status.
-A signed component.
An unsigned component.
A component is safe.
Components that do not have a binary component to sign, such as a pixel format, should return this value.
A component has been disabled.
Specifies the type of Windows Imaging Component (WIC) component.
-A WIC decoder.
A WIC encoder.
A WIC pixel converter.
A WIC metadata reader.
A WIC metadata writer.
A WIC pixel format.
All WIC components.
Specifies the meaning of pixel color component values contained in the DDS image.
-Alpha behavior is unspecified and must be determined by the reader.
The alpha data is straight.
The alpha data is premultiplied.
The alpha data is opaque (UNORM value of 1). This can be used by a compliant reader as a performance optimization. For example, blending operations can be converted to copies.
The alpha channel contains custom data that is not alpha.
Specifies the dimension type of the data contained in DDS image.
-Both WICDdsTexture2d and
The DDS image contains a 1-dimensional texture.
The DDS image contains a 2-dimensional texture.
The DDS image contains a 3-dimensional texture.
The DDS image contains a cube texture represented as an array of 6 faces.
Specifies decode options.
-Cache metadata when needed.
Cache metadata when decoder is loaded.
Specifies the application extension metadata properties for a Graphics Interchange Format (GIF) image.
-[VT_UI1 | VT_VECTOR] Indicates a string that identifies the application.
[VT_UI1 | VT_VECTOR] Indicates data that is exposed by the application.
Specifies the comment extension metadata properties for a Graphics Interchange Format (GIF) image.
-[VT_LPSTR] Indicates the comment text.
Specifies the graphic control extension metadata properties that define the transitions between each frame animation for Graphics Interchange Format (GIF) images.
-[VT_UI1] Indicates the disposal requirements. 0 - no disposal, 1 - do not dispose, 2 - restore to background color, 3 - restore to previous.
[VT_BOOL] Indicates the user input flag. TRUE if user input should advance to the next frame; otherwise,
[VT_BOOL] Indicates the transparency flag. TRUE if a transparent color is in the color table for this frame; otherwise,
[VT_UI2] Indicates how long to display the next frame before advancing to the next frame, in units of 1/100th of a second.
[VT_UI1] Indicates which color in the palette should be treated as transparent.
Specifies the image descriptor metadata properties for Graphics Interchange Format (GIF) frames.
-[VT_UI2] Indicates the X offset at which to locate this frame within the logical screen.
[VT_UI2] Indicates the Y offset at which to locate this frame within the logical screen.
[VT_UI2] Indicates width of this frame, in pixels.
[VT_UI2] Indicates height of this frame, in pixels.
[VT_BOOL] Indicates the local color table flag. TRUE if a local color table is present; otherwise,
[VT_BOOL] Indicates the interlace flag. TRUE if image is interlaced; otherwise,
[VT_BOOL] Indicates the sorted color table flag. TRUE if the color table is sorted from most frequently to least frequently used color; otherwise,
[VT_UI1] Indicates the value used to calculate the number of bytes contained in the global color table.
To calculate the actual size of the color table, raise 2 to the value of the field + 1.
Specifies the logical screen descriptor properties for Graphics Interchange Format (GIF) metadata.
-[VT_UI1 | VT_VECTOR] Indicates the signature property.
[VT_UI2] Indicates the width in pixels.
[VT_UI2] Indicates the height in pixels.
[VT_BOOL] Indicates the global color table flag. TRUE if a global color table is present; otherwise,
[VT_UI1] Indicates the color resolution in bits per pixel.
[VT_BOOL] Indicates the sorted color table flag. TRUE if the table is sorted; otherwise,
[VT_UI1] Indicates the value used to calculate the number of bytes contained in the global color table.
To calculate the actual size of the color table, raise 2 to the value of the field + 1.
[VT_UI1] Indicates the index within the color table to use for the background (pixels not defined in the image).
[VT_UI1] Indicates the factor used to compute an approximation of the aspect ratio.
Specifies the JPEG chrominance table property.
-[VT_UI2|VT_VECTOR] Indicates the metadata property is a chrominance table.
Specifies the JPEG comment properties.
-Indicates the metadata property is comment text.
Specifies the options for indexing a JPEG image.
-Index generation is deferred until
Index generation is performed when the image is initially loaded.
Specifies the JPEG luminance table property.
-[VT_UI2|VT_VECTOR] Indicates the metadata property is a luminance table.
Specifies the memory layout of pixel data in a JPEG image scan.
-The pixel data is stored in an interleaved memory layout.
The pixel data is stored in a planar memory layout.
The pixel data is stored in a progressive layout.
Specifies conversion matrix from Y'Cb'Cr' to R'G'B'.
-Specifies the identity transfer matrix.
Specifies the BT601 transfer matrix.
Specifies the JPEG YCrCB subsampling options.
-The native JPEG encoder uses
The default subsampling option.
Subsampling option that uses both horizontal and vertical decimation.
Subsampling option that uses horizontal decimation.
Subsampling option that uses no decimation.
Subsampling option that uses 2x vertical downsampling only. This option is only available in Windows 8.1 and later.
Specifies named white balances for raw images.
-The default white balance.
A daylight white balance.
A cloudy white balance.
A shade white balance.
A tungsten white balance.
A fluorescent white balance.
Daylight white balance.
A flash white balance.
A custom white balance. This is typically used when using a picture (grey-card) as white balance.
An automatic balance.
An "as shot" white balance.
Specifies additional options to an
Specifies the Portable Network Graphics (PNG) background (bKGD) chunk metadata properties.
-Indicates the background color. There are three possible types, depending on the image's pixel format.
Specifies the index of the background color in an image with an indexed pixel format.
Specifies the background color in a grayscale image.
Specifies the background color in an RGB image as three USHORT values: {0xRRRR, 0xGGGG, 0xBBBB}.
Specifies the Portable Network Graphics (PNG) cHRM chunk metadata properties for CIE XYZ chromaticity.
-[VT_UI4] Indicates the whitepoint x value ratio.
[VT_UI4] Indicates the whitepoint y value ratio.
[VT_UI4] Indicates the red x value ratio.
[VT_UI4] Indicates the red y value ratio.
[VT_UI4] Indicates the green x value ratio.
[VT_UI4] Indicates the green y value ratio.
[VT_UI4] Indicates the blue x value ratio.
[VT_UI4] Indicates the blue y value ratio.
Specifies the Portable Network Graphics (PNG) filters available for compression optimization.
-Indicates an unspecified PNG filter. This enables WIC to algorithmically choose the best filtering option for the image.
Indicates no PNG filter.
Indicates a PNG sub filter.
Indicates a PNG up filter.
Indicates a PNG average filter.
Indicates a PNG Paeth filter.
Indicates a PNG adaptive filter. This enables WIC to choose the best filtering mode on a per-scanline basis.
Specifies the Portable Network Graphics (PNG) gAMA chunk metadata properties.
-[VT_UI4] Indicates the gamma value.
Specifies the Portable Network Graphics (PNG) hIST chunk metadata properties.
-[VT_VECTOR | VT_UI2] Indicates the approximate usage frequency of each color in the color palette.
Specifies the Portable Network Graphics (PNG) iCCP chunk metadata properties.
-[VT_LPSTR] Indicates the International Color Consortium (ICC) profile name.
[VT_VECTOR | VT_UI1] Indicates the embedded ICC profile.
Specifies the Portable Network Graphics (PNG) iTXT chunk metadata properties.
-[VT_LPSTR] Indicates the keywords in the iTXT metadata chunk.
[VT_UI1] Indicates whether the text in the iTXT chunk is compressed. 1 if the text is compressed; otherwise, 0.
[VT_LPSTR] Indicates the human language used by the translated keyword and the text.
[VT_LPWSTR] Indicates a translation of the keyword into the language indicated by the language tag.
[VT_LPWSTR] Indicates additional text in the iTXT metadata chunk.
Specifies the Portable Network Graphics (PNG) sRGB chunk metadata properties.
-[VT_UI1] Indicates the rendering intent for an sRGB color space image. The rendering intents have the following meaning.
Value | Meaning |
---|---|
0 | Perceptual |
1 | Relative colorimetric |
2 | Saturation |
3 | Absolute colorimetric |
Specifies the Portable Network Graphics (PNG) tIME chunk metadata properties.
-[VT_UI2] Indicates the year of the last modification.
[VT_UI1] Indicates the month of the last modification.
[VT_UI1] Indicates day of the last modification.
[VT_UI1] Indicates the hour of the last modification.
[VT_UI1] Indicates the minute of the last modification.
[VT_UI1] Indicates the second of the last modification.
Specifies when the progress notification callback should be called.
-The callback should be called when codec operations begin.
The callback should be called when codec operations end.
The callback should be called frequently to report status.
The callback should be called on all available progress notifications.
Specifies the progress operations to receive notifications for.
-Receive copy pixel operation.
Receive write pixel operation.
Receive all progress operations available.
Specifies the capability support of a raw image.
-The capability is not supported.
The capability supports only get operations.
The capability supports get and set operations.
Specifies the parameter set used by a raw codec.
-An as shot parameter set.
A user adjusted parameter set.
A codec adjusted parameter set.
Specifies the render intent of the next CopyPixels call.
-Specifies the rotation capabilities of the codec.
-Rotation is not supported.
Set operations for rotation are not supported.
90 degree rotations are supported.
All rotation angles are supported.
Specifies the access level of a Windows Graphics Device Interface (GDI) section.
-Indicates a read only access level.
Indicates a read/write access level.
Specifies the Tagged Image File Format (TIFF) compression options.
-Indicates a suitable compression algorithm based on the image and pixel format.
Indicates no compression.
Indicates a CCITT3 compression algorithm. This algorithm is only valid for 1bpp pixel formats.
Indicates a CCITT4 compression algorithm. This algorithm is only valid for 1bpp pixel formats.
Indicates a LZW compression algorithm.
Indicates a RLE compression algorithm. This algorithm is only valid for 1bpp pixel formats.
Indicates a ZIP compression algorithm.
Indicates an LZWH differencing algorithm.
Defines methods that add the concept of writeability and static in-memory representations of bitmaps to
Because of the internal memory representation implied by the
Provides access for palette modifications.
-Provides access to a rectangular area of the bitmap.
-The rectangle to be accessed.
The access mode you wish to obtain for the lock. This is a bitwise combination of
Value | Meaning |
---|---|
The read access lock. | |
The write access lock. |
A reference that receives the locked memory location.
Locks are exclusive for writing but can be shared for reading. You cannot call CopyPixels while the
Provides access for palette modifications.
-The palette to use for conversion.
If this method succeeds, it returns
Changes the physical resolution of the image.
-The horizontal resolution.
The vertical resolution.
If this method succeeds, it returns
This method has no effect on the actual pixels or samples stored in the bitmap. Instead, the interpretation of the sampling rate is modified. This means that a 96 DPI image that is 96 pixels wide is one inch wide. If the physical resolution is modified to 48 DPI, the bitmap is considered to be 2 inches wide but has the same number of pixels. If the resolution is less than REAL_EPSILON (1.192092896e-07F) the error code
Provides access to a rectangular area of the bitmap.
-The access mode you wish to obtain for the lock. This is a bitwise combination of
Value | Meaning |
---|---|
The read access lock. | |
The write access lock. |
A reference that receives the locked memory location.
Locks are exclusive for writing but can be shared for reading. You cannot call CopyPixels while the
Provides access to a rectangular area of the bitmap.
-The rectangle to be accessed.
The access mode you wish to obtain for the lock. This is a bitwise combination of
Value | Meaning |
---|---|
The read access lock. | |
The write access lock. |
A reference that receives the locked memory location.
Locks are exclusive for writing but can be shared for reading. You cannot call CopyPixels while the
Exposes methods that produce a clipped version of the input bitmap for a specified rectangular region of interest.
-Initializes the bitmap clipper with the provided parameters.
-The input bitmap source.
The rectangle of the bitmap source to clip.
If this method succeeds, it returns
Initializes the bitmap clipper with the provided parameters.
-The input bitmap source.
The rectangle of the bitmap source to clip.
If this method succeeds, it returns
Exposes methods that provide information about a particular codec.
-Proxy function for the GetContainerFormat method.
-Proxy function for the DoesSupportAnimation method.
-Retrieves a value indicating whether the codec supports chromakeys.
-Retrieves a value indicating whether the codec supports lossless formats.
-Retrieves a value indicating whether the codec supports multi frame images.
-Proxy function for the GetContainerFormat method.
-If this function succeeds, it returns
Retrieves the pixel formats the codec supports.
-The size of the pguidPixelFormats array. Use 0
on first call to determine the needed array size.
Receives the supported pixel formats. Use
on first call to determine needed array size.
The array size needed to retrieve all supported pixel formats.
If this method succeeds, it returns
The usage pattern for this method is a two call process. The first call retrieves the array size needed to retrieve all the supported pixel formats by calling it with cFormats set to 0
and pguidPixelFormats set to
. This call sets pcActual to the array size needed. Once the needed array size is determined, a second GetPixelFormats call with pguidPixelFormats set to an array of the appropriate size will retrieve the pixel formats.
Retrieves the color management version number the codec supports.
-The size of the version buffer. Use 0
on first call to determine needed buffer size.
Receives the color management version number. Use
on first call to determine needed buffer size.
The actual buffer size needed to retrieve the full color management version number.
If this method succeeds, it returns
The usage pattern for this method is a two call process. The first call retrieves the buffer size needed to retrieve the full color management version number by calling it with cchColorManagementVersion set to 0
and wzColorManagementVersion set to
. This call sets pcchActual to the buffer size needed. Once the needed buffer size is determined, a second GetColorManagementVersion call with cchColorManagementVersion set to the buffer size and wzColorManagementVersion set to a buffer of the appropriate size will retrieve the color management version number.
Retrieves the name of the device manufacturer associated with the codec.
-The size of the device manufacturer's name buffer. Use 0
on first call to determine needed buffer size.
Receives the device manufacturer's name. Use
on first call to determine needed buffer size.
The actual buffer size needed to retrieve the device manufacturer's name.
If this method succeeds, it returns
The usage pattern for this method is a two call process. The first call retrieves the buffer size needed to retrieve the device manufacturer's name by calling it with cchDeviceManufacturer set to 0
and wzDeviceManufacturer set to
. This call sets pcchActual to the buffer size needed. Once the needed buffer size is determined, a second GetDeviceManufacturer call with cchDeviceManufacturer set to the buffer size and wzDeviceManufacturer set to a buffer of the appropriate size will retrieve the device manufacturer's name.
Retrieves a comma delimited list of device models associated with the codec.
-The size of the device models buffer. Use 0
on first call to determine needed buffer size.
Receives a comma delimited list of device model names associated with the codec. Use
on first call to determine needed buffer size.
The actual buffer size needed to retrieve all of the device model names.
If this method succeeds, it returns
The usage pattern for this method is a two call process. The first call retrieves the buffer size needed to retrieve all of the device model names by calling it with cchDeviceModels set to 0
and wzDeviceModels set to
. This call sets pcchActual to the buffer size needed. Once the needed buffer size is determined, a second GetDeviceModels call with cchDeviceModels set to the buffer size and wzDeviceModels set to a buffer of the appropriate size will retrieve the device model names.
Proxy function for the GetMimeTypes method.
-If this function succeeds, it returns
Retrieves a comma delimited list of the file name extensions associated with the codec.
-The size of the file name extension buffer. Use 0
on first call to determine needed buffer size.
Receives a comma delimited list of file name extensions associated with the codec. Use
on first call to determine needed buffer size.
The actual buffer size needed to retrieve all file name extensions associated with the codec.
If this method succeeds, it returns
The default extension for an image encoder is the first item in the list of returned extensions.
The usage pattern for this method is a two call process. The first call retrieves the buffer size needed to retrieve all of the file name extensions by calling it with cchFileExtensions set to 0
and wzFileExtensions set to
. This call sets pcchActual to the buffer size needed. Once the needed buffer size is determined, a second GetFileExtensions call with cchFileExtensions set to the buffer size and wzFileExtensions set to a buffer of the appropriate size will retrieve the file name extensions.
Proxy function for the DoesSupportAnimation method.
-If this function succeeds, it returns
Retrieves a value indicating whether the codec supports chromakeys.
-Receives TRUE if the codec supports chromakeys; otherwise,
If this method succeeds, it returns
Retrieves a value indicating whether the codec supports lossless formats.
-Receives TRUE if the codec supports lossless formats; otherwise,
If this method succeeds, it returns
Retrieves a value indicating whether the codec supports multi frame images.
-Receives TRUE if the codec supports multi frame images; otherwise,
If this method succeeds, it returns
Retrieves a value indicating whether the given mime type matches the mime type of the codec.
-The mime type to compare.
Receives TRUE if the mime types match; otherwise,
Registers a progress notification callback function.
-A function reference to the application defined progress notification callback function. See ProgressNotificationCallback for the callback signature.
A reference to component data for the callback method.
The
If this method succeeds, it returns
Applications can only register a single callback. Subsequent registration calls will replace the previously registered callback. To unregister a callback, pass in
Progress is reported in an increasing order between 0.0 and 1.0. If dwProgressFlags includes
Exposes methods that represent a decoder.
The interface provides access to the decoder's properties such as global thumbnails (if supported), frames, and palette.
-There are a number of concrete implementations of this interface representing each of the standard decoders provided by the platform including bitmap (BMP), Portable Network Graphics (PNG), icon (ICO), Joint Photographic Experts Group (JPEG), Graphics Interchange Format (GIF), Tagged Image File Format (TIFF), and Microsoft Windows Digital Photo (WDP). The following table includes the class identifier (CLSID) for each native decoder.
CLSID Name | CLSID |
---|---|
0x6b462062, 0x7cbf, 0x400d, 0x9f, 0xdb, 0x81, 0x3d, 0xd1, 0xf, 0x27, 0x78 | |
0x389ea17b, 0x5078, 0x4cde, 0xb6, 0xef, 0x25, 0xc1, 0x51, 0x75, 0xc7, 0x51 | |
0xc61bfcdf, 0x2e0f, 0x4aad, 0xa8, 0xd7, 0xe0, 0x6b, 0xaf, 0xeb, 0xcd, 0xfe | |
0x9456a480, 0xe88b, 0x43ea, 0x9e, 0x73, 0xb, 0x2d, 0x9b, 0x71, 0xb1, 0xca | |
0x381dda3c, 0x9ce9, 0x4834, 0xa2, 0x3e, 0x1f, 0x98, 0xf8, 0xfc, 0x52, 0xbe | |
0xb54e85d9, 0xfe23, 0x499f, 0x8b, 0x88, 0x6a, 0xce, 0xa7, 0x13, 0x75, 0x2b | |
0xa26cec36, 0x234c, 0x4950, 0xae, 0x16, 0xe3, 0x4a, 0xac, 0xe7, 0x1d, 0x0d |
This interface may be sub-classed to provide support for third party codecs as part of the extensibility model. See the AITCodec Sample CODEC.
Codecs written as TIFF container formats that are not registered will decode as a TIFF image. Client applications should check for a zero frame count to determine if the codec is valid.
-Retrieves the image's container format.
-Retrieves an
Proxy function for the GetMetadataQueryReader method.
-Retrieves a preview image, if supported.
-Not all formats support previews. Only the native Microsoft Windows Digital Photo (WDP) codec supports previews.
-Proxy function for the GetThumbnail method.
-Retrieves the total number of frames in the image.
-Retrieves the capabilities of the decoder based on the specified stream.
-The stream to retrieve the decoder capabilities from.
The
Custom decoder implementations should save the current position of the specified
Initializes the decoder with the provided stream.
-The stream to use for initialization.
The stream contains the encoded pixels which are decoded each time the CopyPixels method on the
The
If this method succeeds, it returns
Retrieves the image's container format.
-A reference that receives the image's container format
If this method succeeds, it returns
Retrieves an
If this method succeeds, it returns
Proxy function for the CopyPalette method.
-If this function succeeds, it returns
Proxy function for the GetMetadataQueryReader method.
-If this function succeeds, it returns
Retrieves a preview image, if supported.
-Receives a reference to the preview bitmap if supported.
If this method succeeds, it returns
Not all formats support previews. Only the native Microsoft Windows Digital Photo (WDP) codec supports previews.
-Proxy function for the GetColorContexts method.
-If this function succeeds, it returns
Proxy function for the GetColorContexts method.
-If this function succeeds, it returns
Proxy function for the GetColorContexts method.
-If this function succeeds, it returns
Proxy function for the GetThumbnail method.
-If this function succeeds, it returns
Retrieves the total number of frames in the image.
-A reference that receives the total number of frames in the image.
If this method succeeds, it returns
Retrieves the specified frame of the image.
-The particular frame to retrieve.
A reference that receives a reference to the
Exposes methods that provide information about a decoder.
-Retrieves the file pattern signatures supported by the decoder.
-The array size of the pPatterns array.
Receives a list of
Receives the number of patterns the decoder supports.
Receives the actual buffer size needed to retrieve all pattern signatures supported by the decoder.
If this method succeeds, it returns
To retrieve all pattern signatures, this method should first be called with pPatterns set to
to retrieve the actual buffer size needed through pcbPatternsActual. Once the needed buffer size is known, allocate a buffer of the needed size and call GetPatterns again with the allocated buffer.
Retrieves a value that indicates whether the codec recognizes the pattern within a specified stream.
-The stream to pattern match within.
A reference that receives TRUE if the patterns match; otherwise,
Creates a new
If this method succeeds, it returns
Defines methods for setting an encoder's properties such as thumbnails, frames, and palettes.
-There are a number of concrete implementations of this interface representing each of the standard encoders provided by the platform including bitmap (BMP), Portable Network Graphics (PNG), Joint Photographic Experts Group (JPEG), Graphics Interchange Format (GIF), Tagged Image File Format (TIFF), and Microsoft Windows Digital Photo (WDP). The following table includes the class identifier (CLSID) for each native encoder.
CLSID Name | CLSID |
---|---|
0x69be8bb4, 0xd66d, 0x47c8, 0x86, 0x5a, 0xed, 0x15, 0x89, 0x43, 0x37, 0x82 | |
0x27949969, 0x876a, 0x41d7, 0x94, 0x47, 0x56, 0x8f, 0x6a, 0x35, 0xa4, 0xdc | |
0x1a34f5c1, 0x4a5a, 0x46dc, 0xb6, 0x44, 0x1f, 0x45, 0x67, 0xe7, 0xa6, 0x76 | |
0x114f5598, 0xb22, 0x40a0, 0x86, 0xa1, 0xc8, 0x3e, 0xa4, 0x95, 0xad, 0xbd | |
0x0131be10, 0x2001, 0x4c5f, 0xa9, 0xb0, 0xcc, 0x88, 0xfa, 0xb6, 0x4c, 0xe8 | |
0xac4ce3cb, 0xe1c1, 0x44cd, 0x82, 0x15, 0x5a, 0x16, 0x65, 0x50, 0x9e, 0xc2 |
Additionally this interface may be sub-classed to provide support for third party codecs as part of the extensibility model. See the AITCodec Sample CODEC.
-Retrieves the encoder's container format.
-Retrieves an
Proxy function for the SetPalette method.
-Sets the global thumbnail for the image.
-Sets the global preview for the image.
-Proxy function for the GetMetadataQueryWriter method.
-Initializes the encoder with an
If this method succeeds, it returns
Retrieves the encoder's container format.
-A reference that receives the encoder's container format
If this method succeeds, it returns
Retrieves an
If this method succeeds, it returns
Sets the
If this method succeeds, it returns
Sets the
If this method succeeds, it returns
Sets the
If this method succeeds, it returns
Proxy function for the SetPalette method.
-If this function succeeds, it returns
Sets the global thumbnail for the image.
-The
Returns
Returns
Sets the global preview for the image.
-The
Returns
Returns
Creates a new
If this method succeeds, it returns
The parameter ppIEncoderOptions can be used to receive an
Otherwise, you can pass
See Encoding Overview for an example of how to set encoder options.
For formats that support encoding multiple frames (for example, TIFF, JPEG-XR), you can work on only one frame at a time. This means that you must call
Commits all changes for the image and closes the stream.
-If this method succeeds, it returns
To finalize an image, both the frame Commit and the encoder Commit must be called. However, only call the encoder Commit method after all frames have been committed.
After the encoder has been committed, it can't be re-initialized or reused with another stream. A new encoder interface must be created, for example, with
For the encoder Commit to succeed, you must at a minimum call
Proxy function for the GetMetadataQueryWriter method.
-If this function succeeds, it returns
Exposes methods that provide information about an encoder.
-Creates a new
If this method succeeds, it returns
Exposes methods that produce a flipped (horizontal or vertical) and/or rotated (by 90 degree increments) bitmap source. Rotations are done before the flip.
-IWICBitmapFlipRotator requests data on a per-pixel basis, while WIC codecs provide data on a per-scanline basis. This causes the flip rotator object to exhibit n² behavior if there is no buffering. This occurs because each pixel in the transformed image requires an entire scanline to be decoded in the file. It is recommended that you buffer the image using
Initializes the bitmap flip rotator with the provided parameters.
-The input bitmap source.
The
If this method succeeds, it returns
Defines methods for decoding individual image frames of an encoded file.
-Retrieves a metadata query reader for the frame.
-For image formats with one frame (JPG, PNG, JPEG-XR), the frame-level query reader of the first frame is used to access all image metadata, and the decoder-level query reader isn't used. For formats with more than one frame (GIF, TIFF), the frame-level query reader for a given frame is used to access metadata specific to that frame, and in the case of GIF a decoder-level metadata reader will be present. If the decoder doesn't support metadata (BMP, ICO), this will return
Retrieves a small preview of the frame, if supported by the codec.
-Not all formats support thumbnails. Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), and Microsoft Windows Digital Photo (WDP) support thumbnails.
-Retrieves a metadata query reader for the frame.
-When this method returns, contains a reference to the frame's metadata query reader.
If this method succeeds, it returns
For image formats with one frame (JPG, PNG, JPEG-XR), the frame-level query reader of the first frame is used to access all image metadata, and the decoder-level query reader isn't used. For formats with more than one frame (GIF, TIFF), the frame-level query reader for a given frame is used to access metadata specific to that frame, and in the case of GIF a decoder-level metadata reader will be present. If the decoder doesn't support metadata (BMP, ICO), this will return
Retrieves the
If this method succeeds, it returns
If
The ppIColorContexts array must be filled with valid data: each
Retrieves the
If this method succeeds, it returns
If
The ppIColorContexts array must be filled with valid data: each
Retrieves the
If this method succeeds, it returns
If
The ppIColorContexts array must be filled with valid data: each
Retrieves a small preview of the frame, if supported by the codec.
-A reference that receives a reference to the
If this method succeeds, it returns
-Not all formats support thumbnails. Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), and Microsoft Windows Digital Photo (WDP) support thumbnails.
-Represents an encoder's individual image frames.
-Sets the
This method doesn't fail if called on a frame whose pixel format is set to a non-indexed pixel format. If the target pixel format is a non-indexed format, the palette will be ignored.
If you already called
The palette must be specified before your first call to WritePixels/WriteSource. Doing so will cause WriteSource to use the specified palette when converting the source image to the encoder pixel format. If no palette is specified, a palette will be generated on the first call to WriteSource.
-Proxy function for the SetThumbnail method.
-Gets the metadata query writer for the encoder frame.
-If you are setting metadata on the frame, you must do this before you use
Initializes the frame encoder using the given properties.
-The set of properties to use for
If this method succeeds, it returns
If you don't want any encoding options, pass
For a complete list of encoding options supported by the Windows-provided codecs, see Native WIC Codecs.
-Sets the output image dimensions for the frame.
-The width of the output image.
The height of the output image.
If this method succeeds, it returns
Sets the physical resolution of the output image.
-The horizontal resolution value.
The vertical resolution value.
If this method succeeds, it returns
Windows Imaging Component (WIC) doesn't perform any special processing as a result of DPI resolution values. For example, data returned from
Requests that the encoder use the specified pixel format.
-On input, the requested pixel format
Possible return values include the following.
Return code | Description |
---|---|
| Success. |
| The |
The encoder might not support the requested pixel format. If not, SetPixelFormat returns the closest match in the memory block that pPixelFormat points to. If the returned pixel format doesn't match the requested format, you must use an
Proxy function for the SetColorContexts method.
-If this function succeeds, it returns
Proxy function for the SetColorContexts method.
-If this function succeeds, it returns
Proxy function for the SetColorContexts method.
-If this function succeeds, it returns
Sets the
If this method succeeds, it returns
This method doesn't fail if called on a frame whose pixel format is set to a non-indexed pixel format. If the target pixel format is a non-indexed format, the palette will be ignored.
If you already called
The palette must be specified before your first call to WritePixels/WriteSource. Doing so will cause WriteSource to use the specified palette when converting the source image to the encoder pixel format. If no palette is specified, a palette will be generated on the first call to WriteSource. -
-Proxy function for the SetThumbnail method.
-If this function succeeds, it returns
Copies scan-line data from a caller-supplied buffer to the
Possible return values include the following.
Return code | Description |
---|---|
| Success. |
| The value of lineCount is larger than the number of scan lines in the image. |
Successive WritePixels calls are assumed to be sequential scan-line access in the output image.
-Encodes a bitmap source.
-The bitmap source to encode.
The size rectangle of the bitmap source.
If this method succeeds, it returns
If SetSize is not called prior to calling WriteSource, the size given in prc is used if not
If SetPixelFormat is not called prior to calling WriteSource, the pixel format of the
If SetResolution is not called prior to calling WriteSource, the resolution of pIBitmapSource is used.
If SetPalette is not called prior to calling WriteSource, the target pixel format is indexed, and the pixel format of pIBitmapSource matches the encoder frame's pixel format, then the pIBitmapSource pixel format is used.
When encoding a GIF image, if the global palette is set and the frame level palette is not set directly by the user or by a custom independent software vendor (ISV) GIF codec, WriteSource will use the global palette to encode the frame even when pIBitmapSource has a frame level palette.
Starting with Windows Vista, repeated WriteSource calls can be made as long as the total accumulated source rect height is the same as set through SetSize.
Starting with Windows 8.1, the source rect must be at least the dimensions set through SetSize. If the source rect width exceeds the SetSize width, extra pixels on the right side are ignored. If the source rect height exceeds the remaining unfilled height, extra scan lines on the bottom are ignored.
-Commits the frame to the image.
-If this method succeeds, it returns
After the frame Commit has been called, you can't use or reinitialize the
To finalize the image, both the frame Commit and the encoder Commit must be called. However, only call the encoder Commit method after all frames have been committed.
-Gets the metadata query writer for the encoder frame.
-When this method returns, contains a reference to metadata query writer for the encoder frame.
If this method succeeds, it returns
If you are setting metadata on the frame, you must do this before you use
Encodes the frame scanlines.
-The number of lines to encode.
Successive WritePixels calls are assumed to be sequential scanline access in the output image.
-Encodes the frame scanlines.
-The number of lines to encode.
Successive WritePixels calls are assumed to be sequential scanline access in the output image.
-Encodes the frame scanlines.
-The number of lines to encode.
The stride of the image pixels.
A reference to the pixel buffer.
Successive WritePixels calls are assumed to be sequential scanline access in the output image.
-Encodes a bitmap source.
-The bitmap source to encode.
If SetSize is not called prior to calling WriteSource, the size given in prc is used if not
If SetPixelFormat is not called prior to calling WriteSource, the pixel format of the
If SetResolution is not called prior to calling WriteSource, the resolution of pIBitmapSource is used.
If SetPalette is not called prior to calling WriteSource, the target pixel format is indexed, and the pixel format of pIBitmapSource matches the encoder frame's pixel format, then the pIBitmapSource pixel format is used.
When encoding a GIF image, if the global palette is set and the frame level palette is not set directly by the user or by a custom independent software vendor (ISV) GIF codec, WriteSource will use the global palette to encode the frame even when pIBitmapSource has a frame level palette.
Windows Vista: The source rect width must match the width set through SetSize. Repeated WriteSource calls can be made as long as the total accumulated source rect height is the same as set through SetSize.
-Encodes a bitmap source.
-The bitmap source to encode.
The size rectangle of the bitmap source.
If SetSize is not called prior to calling WriteSource, the size given in prc is used if not
If SetPixelFormat is not called prior to calling WriteSource, the pixel format of the
If SetResolution is not called prior to calling WriteSource, the resolution of pIBitmapSource is used.
If SetPalette is not called prior to calling WriteSource, the target pixel format is indexed, and the pixel format of pIBitmapSource matches the encoder frame's pixel format, then the pIBitmapSource pixel format is used.
When encoding a GIF image, if the global palette is set and the frame level palette is not set directly by the user or by a custom independent software vendor (ISV) GIF codec, WriteSource will use the global palette to encode the frame even when pIBitmapSource has a frame level palette.
Windows Vista: The source rect width must match the width set through SetSize. Repeated WriteSource calls can be made as long as the total accumulated source rect height is the same as set through SetSize.
-Exposes methods that support the Lock method.
- The bitmap lock is simply an abstraction for a rectangular memory window into the bitmap. For the simplest case, a system memory bitmap, this is a reference to the top left corner of the rectangle and a stride value.
To release the exclusive lock set by Lock method and the associated
Provides access to the stride value for the memory.
- Note the stride value is specific to the
Gets the pixel format for the locked area of pixels. This can be used to compute the number of bytes-per-pixel in the locked area.
-Retrieves the width and height, in pixels, of the locked rectangle.
-A reference that receives the width of the locked rectangle.
A reference that receives the height of the locked rectangle.
If this method succeeds, it returns
Provides access to the stride value for the memory.
-If this method succeeds, it returns
Note the stride value is specific to the
Gets the reference to the top left pixel in the locked rectangle.
-A reference that receives the size of the buffer.
A reference that receives a reference to the top left pixel in the locked rectangle.
The reference provided by this method should not be used outside of the lifetime of the lock itself.
GetDataPointer is not available in multi-threaded apartment applications.
- Gets the pixel format for the locked area of pixels. This can be used to compute the number of bytes-per-pixel in the locked area.
-A reference that receives the pixel format
If this method succeeds, it returns
Represents a resized version of the input bitmap using a resampling or filtering algorithm.
-Images can be scaled to larger sizes; however, even with sophisticated scaling algorithms, there is only so much information in the image and artifacts tend to worsen the more you scale up.
The scaler will reapply the resampling algorithm every time CopyPixels is called. If the scaled image is to be animated, the scaled image should be created once and cached in a new bitmap, after which the
The scaler is optimized to use the minimum amount of memory required to scale the image correctly. The scaler may be used to produce parts of the image incrementally (banding) by calling CopyPixels with different rectangles representing the output bands of the image. Resampling typically requires overlapping rectangles from the source image and thus may need to request the same pixels from the source bitmap multiple times. Requesting scanlines out-of-order from some image decoders can have a significant performance penalty. Because of this reason, the scaler is optimized to handle consecutive horizontal bands of scanlines (rectangle width equal to the bitmap width). In this case the accumulator from the previous vertically adjacent rectangle is re-used to avoid duplicate scanline requests from the source. This implies that banded output from the scaler may have better performance if the bands are requested sequentially. Of course if the scaler is simply used to produce a single rectangle output, this concern is eliminated because the scaler will internally request scanlines in the correct order.
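The banding pattern described above can be illustrated with a small sketch. This is plain arithmetic in Python, not the WIC API; `horizontal_bands` is an illustrative name. Each band is a full-width rectangle, and requesting them top to bottom lets the scaler reuse its accumulator between calls.

```python
def horizontal_bands(width, height, band_height):
    """Yield (x, y, w, h) rectangles covering a bitmap as consecutive
    full-width horizontal bands, in the top-to-bottom order that lets
    the scaler reuse its accumulator between CopyPixels calls."""
    y = 0
    while y < height:
        h = min(band_height, height - y)
        yield (0, y, width, h)
        y += h

# A 640x480 scaled image copied in 128-line bands yields four bands:
# three of 128 lines and a final band of 96 lines.
bands = list(horizontal_bands(640, 480, 128))
```

Requesting these rectangles in the order generated avoids re-decoding overlapping source scanlines; requesting them out of order forces the scaler to restart its accumulator.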
-Initializes the bitmap scaler with the provided parameters.
-The input bitmap source.
The destination width.
The destination height.
The
If this method succeeds, it returns
Exposes methods that refers to a source from which pixels are retrieved, but cannot be written back to.
-This interface provides a common way of accessing and linking together bitmaps, decoders, format converters, and scalers. Components that implement this interface can be connected together in a graph to pull imaging data through.
This interface defines only the notion of readability or being able to produce pixels. Modifying or writing to a bitmap is considered to be a specialization specific to bitmaps which have storage and is defined in the descendant interface
Retrieves the pixel format of the bitmap source.
-The pixel format returned by this method is not necessarily the pixel format the image is stored as. The codec may perform a format conversion from the storage pixel format to an output pixel format.
-Retrieves the pixel width and height of the bitmap.
-A reference that receives the pixel width of the bitmap.
A reference that receives the pixel height of the bitmap
If this method succeeds, it returns
Retrieves the pixel format of the bitmap source.
-Receives the pixel format
If this method succeeds, it returns
The pixel format returned by this method is not necessarily the pixel format the image is stored as. The codec may perform a format conversion from the storage pixel format to an output pixel format.
-Retrieves the sampling rate between pixels and physical world measurements.
-A reference that receives the x-axis dpi resolution.
A reference that receives the y-axis dpi resolution.
If this method succeeds, it returns
Some formats, such as GIF and ICO, do not have full DPI support. For GIF, this method calculates the DPI values from the aspect ratio, using a base DPI of (96.0, 96.0). The ICO format does not support DPI at all, and the method always returns (96.0, 96.0) for ICO images.
Additionally, WIC itself does not transform images based on the DPI values in an image. It is up to the caller to transform an image based on the resolution returned.
-Retrieves the color table for indexed pixel formats.
-An
Returns one of the following values.
Return code | Description |
---|---|
| The palette was unavailable. |
| The palette was successfully copied. |
If the
Instructs the object to produce pixels.
-The rectangle to copy. A
The stride of the bitmap
The size of the buffer.
A reference to the buffer.
If this method succeeds, it returns
CopyPixels is one of the two main image processing routines (the other being Lock) triggering the actual processing. It instructs the object to produce pixels according to its algorithm - this may involve decoding a portion of a JPEG stored on disk, copying a block of memory, or even analytically computing a complex gradient. The algorithm is completely dependent on the object implementing the interface.
The caller can restrict the operation to a rectangle of interest (ROI) using the prc parameter. The ROI sub-rectangle must be fully contained in the bounds of the bitmap. Specifying a
The caller controls the memory management and must provide an output buffer (pbBuffer) for the results of the copy along with the buffer's bounds (cbBufferSize). The cbStride parameter defines the count of bytes between two vertically adjacent pixels in the output buffer. The caller must ensure that there is sufficient buffer to complete the call based on the width, height and pixel format of the bitmap and the sub-rectangle provided to the copy method.
If the caller needs to perform numerous copies of an expensive
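The buffer-sizing rule above can be written out as a short sketch (plain Python; the function names are illustrative, not part of the API). Each line of output occupies `(width * bpp + 7) / 8` bytes, as quoted later in these remarks, and lines are separated by the caller-chosen stride.

```python
def bytes_per_line(width_px, bits_per_pixel):
    """Bytes actually written per scan line: (width * bpp + 7) / 8,
    i.e. the pixel bits rounded up to a whole byte."""
    return (width_px * bits_per_pixel + 7) // 8

def min_buffer_size(width_px, height_px, bits_per_pixel, stride):
    """Smallest buffer that can hold the copied rectangle: every line
    but the last occupies a full stride; the last line only needs its
    written bytes."""
    line = bytes_per_line(width_px, bits_per_pixel)
    assert stride >= line, "stride must cover one line of pixels"
    return stride * (height_px - 1) + line

# 100 pixels of 24bpp data occupy 300 bytes per line; with a stride
# padded to 304 bytes, a 50-line rectangle needs 15,196 bytes.
```

A buffer smaller than this for the given sub-rectangle makes the CopyPixels call fail, so validate the arithmetic before allocating.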
Retrieves the pixel width and height of the bitmap.
-Instructs the object to produce pixels.
-The rectangle to copy. A
The stride of the bitmap
A reference to the buffer.
CopyPixels is one of the two main image processing routines (the other being Lock) triggering the actual processing. It instructs the object to produce pixels according to its algorithm - this may involve decoding a portion of a JPEG stored on disk, copying a block of memory, or even analytically computing a complex gradient. The algorithm is completely dependent on the object implementing the interface.
The caller can restrict the operation to a rectangle of interest (ROI) using the prc parameter. The ROI sub-rectangle must be fully contained in the bounds of the bitmap. Specifying a
The caller controls the memory management and must provide an output buffer (pbBuffer) for the results of the copy along with the buffer's bounds (cbBufferSize). The cbStride parameter defines the count of bytes between two vertically adjacent pixels in the output buffer. The caller must ensure that there is sufficient buffer to complete the call based on the width, height and pixel format of the bitmap and the sub-rectangle provided to the copy method.
If the caller needs to perform numerous copies of an expensive
The callee must only write to the first (prc->Width*bitsperpixel+7)/8 bytes of each line of the output buffer (in this case, a line is a consecutive string of cbStride bytes).
-Instructs the object to produce pixels.
-The stride of the bitmap
A reference to the buffer.
CopyPixels is one of the two main image processing routines (the other being Lock) triggering the actual processing. It instructs the object to produce pixels according to its algorithm - this may involve decoding a portion of a JPEG stored on disk, copying a block of memory, or even analytically computing a complex gradient. The algorithm is completely dependent on the object implementing the interface.
The caller can restrict the operation to a rectangle of interest (ROI) using the prc parameter. The ROI sub-rectangle must be fully contained in the bounds of the bitmap. Specifying a
The caller controls the memory management and must provide an output buffer (pbBuffer) for the results of the copy along with the buffer's bounds (cbBufferSize). The cbStride parameter defines the count of bytes between two vertically adjacent pixels in the output buffer. The caller must ensure that there is sufficient buffer to complete the call based on the width, height and pixel format of the bitmap and the sub-rectangle provided to the copy method.
If the caller needs to perform numerous copies of an expensive
The callee must only write to the first (prc->Width*bitsperpixel+7)/8 bytes of each line of the output buffer (in this case, a line is a consecutive string of cbStride bytes).
-Instructs the object to produce pixels.
-The stride of the bitmap
A reference to the buffer.
CopyPixels is one of the two main image processing routines (the other being Lock) triggering the actual processing. It instructs the object to produce pixels according to its algorithm - this may involve decoding a portion of a JPEG stored on disk, copying a block of memory, or even analytically computing a complex gradient. The algorithm is completely dependent on the object implementing the interface.
The caller can restrict the operation to a rectangle of interest (ROI) using the prc parameter. The ROI sub-rectangle must be fully contained in the bounds of the bitmap. Specifying a
The caller controls the memory management and must provide an output buffer (pbBuffer) for the results of the copy along with the buffer's bounds (cbBufferSize). The cbStride parameter defines the count of bytes between two vertically adjacent pixels in the output buffer. The caller must ensure that there is sufficient buffer to complete the call based on the width, height and pixel format of the bitmap and the sub-rectangle provided to the copy method.
If the caller needs to perform numerous copies of an expensive
The callee must only write to the first (prc->Width*bitsperpixel+7)/8 bytes of each line of the output buffer (in this case, a line is a consecutive string of cbStride bytes).
-Instructs the object to produce pixels.
-The rectangle to copy. A
If this method succeeds, it returns
CopyPixels is one of the two main image processing routines (the other being Lock) triggering the actual processing. It instructs the object to produce pixels according to its algorithm - this may involve decoding a portion of a JPEG stored on disk, copying a block of memory, or even analytically computing a complex gradient. The algorithm is completely dependent on the object implementing the interface.
The caller can restrict the operation to a rectangle of interest (ROI) using the prc parameter. The ROI sub-rectangle must be fully contained in the bounds of the bitmap. Specifying a
The caller controls the memory management and must provide an output buffer (pbBuffer) for the results of the copy along with the buffer's bounds (cbBufferSize). The cbStride parameter defines the count of bytes between two vertically adjacent pixels in the output buffer. The caller must ensure that there is sufficient buffer to complete the call based on the width, height and pixel format of the bitmap and the sub-rectangle provided to the copy method.
If the caller needs to perform numerous copies of an expensive
The callee must only write to the first (prc->Width*bitsperpixel+7)/8 bytes of each line of the output buffer (in this case, a line is a consecutive string of cbStride bytes).
Instructs the object to produce pixels.
-If this method succeeds, it returns
CopyPixels is one of the two main image processing routines (the other being Lock) triggering the actual processing. It instructs the object to produce pixels according to its algorithm - this may involve decoding a portion of a JPEG stored on disk, copying a block of memory, or even analytically computing a complex gradient. The algorithm is completely dependent on the object implementing the interface.
The caller can restrict the operation to a rectangle of interest (ROI) using the prc parameter. The ROI sub-rectangle must be fully contained in the bounds of the bitmap. Specifying a
The caller controls the memory management and must provide an output buffer (pbBuffer) for the results of the copy along with the buffer's bounds (cbBufferSize). The cbStride parameter defines the count of bytes between two vertically adjacent pixels in the output buffer. The caller must ensure that there is sufficient buffer to complete the call based on the width, height and pixel format of the bitmap and the sub-rectangle provided to the copy method.
If the caller needs to perform numerous copies of an expensive
The callee must only write to the first (prc->Width*bitsperpixel+7)/8 bytes of each line of the output buffer (in this case, a line is a consecutive string of cbStride bytes).
-Instructs the object to produce pixels.
-If this method succeeds, it returns
CopyPixels is one of the two main image processing routines (the other being Lock) triggering the actual processing. It instructs the object to produce pixels according to its algorithm - this may involve decoding a portion of a JPEG stored on disk, copying a block of memory, or even analytically computing a complex gradient. The algorithm is completely dependent on the object implementing the interface.
The caller can restrict the operation to a rectangle of interest (ROI) using the prc parameter. The ROI sub-rectangle must be fully contained in the bounds of the bitmap. Specifying a
The caller controls the memory management and must provide an output buffer (pbBuffer) for the results of the copy along with the buffer's bounds (cbBufferSize). The cbStride parameter defines the count of bytes between two vertically adjacent pixels in the output buffer. The caller must ensure that there is sufficient buffer to complete the call based on the width, height and pixel format of the bitmap and the sub-rectangle provided to the copy method.
If the caller needs to perform numerous copies of an expensive
The callee must only write to the first (prc->Width*bitsperpixel+7)/8 bytes of each line of the output buffer (in this case, a line is a consecutive string of cbStride bytes).
Instructs the object to produce pixels.
-If this method succeeds, it returns
CopyPixels is one of the two main image processing routines (the other being Lock) triggering the actual processing. It instructs the object to produce pixels according to its algorithm - this may involve decoding a portion of a JPEG stored on disk, copying a block of memory, or even analytically computing a complex gradient. The algorithm is completely dependent on the object implementing the interface.
The caller can restrict the operation to a rectangle of interest (ROI) using the prc parameter. The ROI sub-rectangle must be fully contained in the bounds of the bitmap. Specifying a
The caller controls the memory management and must provide an output buffer (pbBuffer) for the results of the copy along with the buffer's bounds (cbBufferSize). The cbStride parameter defines the count of bytes between two vertically adjacent pixels in the output buffer. The caller must ensure that there is sufficient buffer to complete the call based on the width, height and pixel format of the bitmap and the sub-rectangle provided to the copy method.
If the caller needs to perform numerous copies of an expensive
The callee must only write to the first (prc->Width*bitsperpixel+7)/8 bytes of each line of the output buffer (in this case, a line is a consecutive string of cbStride bytes).
Copies pixel data using the supplied input parameters.
-The rectangle of pixels to copy.
The width to scale the source bitmap. This parameter must equal the value obtainable through
The height to scale the source bitmap. This parameter must equal the value obtainable through
The
This
The desired rotation or flip to perform prior to the pixel copy.
The transform must be an operation supported by a DoesSupportTransform call.
If a dstTransform is specified, nStride is the transformed stride and is based on the pguidDstFormat pixel format, not the original source's pixel format.
The stride of the destination buffer.
The size of the destination buffer.
The output buffer.
If this method succeeds, it returns
Returns the closest dimensions the implementation can natively scale to given the desired dimensions.
-The desired width. A reference that receives the closest supported width.
The desired height. A reference that receives the closest supported height.
If this method succeeds, it returns
The Windows provided codecs provide the following support for native scaling: -
Retrieves the closest pixel format to which the implementation of
If this method succeeds, it returns
The Windows provided codecs provide the following support:
Determines whether a specific transform option is supported natively by the implementation of the
If this method succeeds, it returns
The Windows provided codecs provide the following level of support:
Exposes methods for color management.
-A Color Context is an abstraction for a color profile. The profile can either be loaded from a file (like "sRGB Color Space Profile.icm"), read from a memory buffer, or can be defined by an EXIF color space. The system color profile directory can be obtained by calling GetColorDirectory.
Once a color context has been initialized, it cannot be re-initialized.
-Retrieves the color context type.
-Retrieves the Exchangeable Image File (EXIF) color space color context.
-This method should only be used when
Initializes the color context from the given file.
-The name of the file.
If this method succeeds, it returns
Once a color context has been initialized, it can't be re-initialized. -
-Initializes the color context from a memory block.
-The buffer used to initialize the
The size of the pbBuffer buffer.
If this method succeeds, it returns
Once a color context has been initialized, it can't be re-initialized. -
-Initializes the color context using an Exchangeable Image File (EXIF) color space.
-The value of the EXIF color space.
Value | Meaning |
---|---|
| An sRGB color space. |
| An Adobe RGB color space. |
If this method succeeds, it returns
Once a color context has been initialized, it can't be re-initialized. -
-Retrieves the color context type.
-A reference that receives the
If this method succeeds, it returns
Retrieves the color context profile.
-The size of the pbBuffer buffer.
A reference that receives the color context profile.
A reference that receives the actual buffer size needed to retrieve the entire color context profile.
If this method succeeds, it returns
Only use this method if the context type is
Calling this method with pbBuffer set to
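The remark above alludes to the common Win32 two-call pattern: query the required size with no buffer, allocate, then call again to retrieve the data. A minimal simulation of that pattern (a hypothetical stand-in function, not the real COM method):

```python
def get_profile_bytes(profile, buffer):
    """Hypothetical stand-in for a size-querying getter: with no
    buffer it reports the required size; with a large-enough buffer
    it fills it and reports the bytes actually written."""
    needed = len(profile)
    if buffer is None:
        return needed, buffer
    if len(buffer) < needed:
        raise ValueError("insufficient buffer")
    buffer[:needed] = profile
    return needed, buffer

profile = bytes(range(16))
# First call discovers the size; second call retrieves the data.
size, _ = get_profile_bytes(profile, None)
_, filled = get_profile_bytes(profile, bytearray(size))
```

The same pattern applies to the other size-reporting getters in this documentation, such as the author and version string retrievers.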
Retrieves the Exchangeable Image File (EXIF) color space color context.
-A reference that receives the EXIF color space color context.
Value | Meaning |
---|---|
| An sRGB color space. |
| An Adobe RGB color space. |
| Unused. |
If this method succeeds, it returns
This method should only be used when
Exposes methods that transform an
A
Once initialized, a color transform cannot be reinitialized. Because of this, a color transform cannot be used with multiple sources or varying parameters.
-Initializes an
If this method succeeds, it returns
The currently supported formats for the pIContextSource and pixelFmtDest parameters are: -
In order to get correct behavior from a color transform, the input and output pixel formats must be compatible with the source and destination color profiles. For example, an sRGB destination color profile will produce incorrect results when used with a CMYK destination pixel format.
-Exposes methods that provide component information.
-Retrieves the component's
Proxy function for the GetCLSID method.
-Retrieves the signing status of the component.
-Signing is unused by WIC. Therefore, all components
This function can be used to determine whether a component has no binary component or has been added to the disabled components list in the registry.
-Retrieves the vendor
Retrieves the component's
If this method succeeds, it returns
Proxy function for the GetCLSID method.
-If this function succeeds, it returns
Retrieves the signing status of the component.
-A reference that receives the
If this method succeeds, it returns
Signing is unused by WIC. Therefore, all components
This function can be used to determine whether a component has no binary component or has been added to the disabled components list in the registry.
-Retrieves the name of component's author.
-The size of the wzAuthor buffer.
A reference that receives the name of the component's author. The locale of the string depends on the value that the codec wrote to the registry at install time. For built-in components, these strings are always in English.
A reference that receives the actual length of the component's author's name. The author name is optional; if an author name is not specified by the component, the length returned is 0.
If this method succeeds, it returns
If cchAuthor is 0 and wzAuthor is
Retrieves the vendor
A reference that receives the component's vendor
If this method succeeds, it returns
Proxy function for the GetVersion method.
-If this function succeeds, it returns
Retrieves the component's specification version.
-The size of the wzSpecVersion buffer.
When this method returns, contains a culture-invariant string of the component's specification version. The version form is NN.NN.NN.NN.
A reference that receives the actual length of the component's specification version. The specification version is optional; if a value is not specified by the component, the length returned is 0.
If this method succeeds, it returns
All built-in components return "1.0.0.0", except for pixel formats, which do not have a spec version.
If cchSpecVersion is 0 and wzSpecVersion is
Retrieves the component's friendly name, which is a human-readable display name for the component.
-The size of the wzFriendlyName buffer.
A reference that receives the friendly name of the component. The locale of the string depends on the value that the codec wrote to the registry at install time. For built-in components, these strings are always in English.
A reference that receives the actual length of the component's friendly name.
If this method succeeds, it returns
If cchFriendlyName is 0 and wzFriendlyName is
Provides information and functionality specific to the DDS image format.
-This interface is implemented by the WIC DDS codec. To obtain this interface, create an
Gets DDS-specific data.
-Gets DDS-specific data.
-A reference to the structure where the information is returned.
If this method succeeds, it returns
Retrieves the specified frame of the DDS image.
-The requested index within the texture array.
The requested mip level.
The requested slice within the 3D texture.
A reference to a
If this method succeeds, it returns
A DDS file can contain multiple images that are organized into a three-level hierarchy. First, a DDS file may contain multiple textures in a texture array. Second, each texture can have multiple mip levels. Finally, the texture may be a 3D (volume) texture and have multiple slices, each of which is a 2D texture. See the DDS documentation for more information.
WIC maps this three level hierarchy into a linear array of
Enables writing DDS format specific information to an encoder.
-This interface is implemented by the WIC DDS codec. To obtain this interface, create an
Gets or sets DDS-specific data.
-An application can call GetParameters to obtain the default DDS parameters, modify some or all of them, and then call SetParameters.
-Sets DDS-specific data.
-Points to the structure where the information is described.
If this method succeeds, it returns
You cannot call this method after you have started to write frame data, for example by calling
Setting DDS parameters using this method provides the DDS encoder with information about the expected number of frames and the dimensions and other parameters of each frame. The DDS encoder will fail if you do not set frame data that matches these expectations. For example, if you set WICDdsParameters::Width and Height to 32, and MipLevels to 6, the DDS encoder will expect 6 frames with the following dimensions:
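The dimension table is elided above, but the halving rule it follows can be sketched in a few lines (plain Python; `mip_chain` is an illustrative name). Each mip level halves the previous width and height, never dropping below one pixel.

```python
def mip_chain(width, height, mip_levels):
    """Dimensions of each expected frame: every mip level halves the
    previous width and height, clamped to a minimum of 1 pixel."""
    dims = []
    w, h = width, height
    for _ in range(mip_levels):
        dims.append((w, h))
        w, h = max(1, w // 2), max(1, h // 2)
    return dims

# Width = Height = 32 with MipLevels = 6, as in the example above,
# gives frames of 32x32, 16x16, 8x8, 4x4, 2x2 and 1x1.
```

If the frame data you subsequently write does not match these dimensions, the DDS encoder fails as described above.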
Gets DDS-specific data.
-Points to the structure where the information is returned.
If this method succeeds, it returns
An application can call GetParameters to obtain the default DDS parameters, modify some or all of them, and then call SetParameters.
-Creates a new frame to encode.
-A reference to the newly created frame object.
Points to the location where the array index is returned.
Points to the location where the mip level index is returned.
Points to the location where the slice index is returned.
If this method succeeds, it returns
This is equivalent to
Provides access to a single frame of DDS image data in its native
This interface is implemented by the WIC DDS codec. To obtain this interface, create an
Gets information about the format in which the DDS image is stored.
-This information can be used for allocating memory or constructing Direct3D or Direct2D resources, for example by using
Gets the width and height, in blocks, of the DDS image.
-The width of the DDS image in blocks.
The height of the DDS image in blocks.
If this method succeeds, it returns
For block compressed textures, the returned width and height values do not completely define the texture size because the image is padded to fit the closest whole block size. For example, three BC1 textures with pixel dimensions of 1x1, 2x2 and 4x4 will all report pWidthInBlocks = 1 and pHeightInBlocks = 1.
If the texture does not use a block-compressed
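The block-padding rule above is a simple ceiling division (sketched here in plain Python; `size_in_blocks` is an illustrative name, and 4x4 texel blocks are assumed, as used by the BC formats):

```python
def size_in_blocks(width_px, height_px, block_size=4):
    """Width and height in blocks for a block-compressed texture:
    the image is padded up to whole blocks (4x4 texels for BC1)."""
    blocks = lambda d: (d + block_size - 1) // block_size
    return blocks(width_px), blocks(height_px)

# BC1 textures of 1x1, 2x2 and 4x4 pixels all occupy a single block,
# matching the example in the remarks; a 5x4 texture needs 2x1 blocks.
```

This is why the returned block counts alone do not fully define the pixel dimensions of small mip levels.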
Gets information about the format in which the DDS image is stored.
-Information about the DDS format.
If this method succeeds, it returns
This information can be used for allocating memory or constructing Direct3D or Direct2D resources, for example by using
Requests pixel data as it is natively stored within the DDS file.
-The rectangle to copy from the source. A
If the texture uses a block-compressed
The stride, in bytes, of the destination buffer. This represents the number of bytes from the buffer reference to the next row of data. If the texture uses a block-compressed
The size, in bytes, of the destination buffer.
A reference to the destination buffer.
If this method succeeds, it returns
If the texture does not use a block-compressed
If the texture uses a block-compressed
[This documentation is preliminary and is subject to change.]
Requests pixel data as it is natively stored within the DDS file.
-The rectangle to copy from the source. A
If the texture uses a block-compressed
The stride, in bytes, of the destination buffer. This represents the number of bytes from the buffer reference to the next row of data. If the texture uses a block-compressed
A reference to the destination buffer.
If this method succeeds, it returns
If the texture does not use a block-compressed
If the texture uses a block-compressed
Gets the current set of parameters.
-Gets or sets the exposure compensation stop value of the raw image.
-Gets or sets the named white point of the raw image.
-If the named white points are not supported by the raw image or the raw file contains named white points that are not supported by this API, the codec implementer should still mark this capability as supported.
If the named white points are not supported by the raw image, a best effort should be made to adjust the image to the named white point even when it isn't a pre-defined white point of the raw file.
If the raw file contains named white points not supported by this API, the codec implementer should support the named white points in
Gets or sets the white point Kelvin temperature of the raw image.
-Gets or sets the contrast value of the raw image.
-Gets or sets the current gamma setting of the raw image.
-Gets or sets the sharpness value of the raw image.
-Gets or sets the saturation value of the raw image.
-Gets or sets the tint value of the raw image.
-Gets or sets the noise reduction value of the raw image.
-Sets the destination color context.
-Gets or sets the current rotation angle.
-Gets or sets the current
Sets the notification callback method.
-Retrieves information about which capabilities are supported for a raw image.
-A reference that receives
If this method succeeds, it returns
It is recommended that a codec report that a capability is supported even if the results at the outer range limits are not of perfect quality.
-Sets the desired
If this method succeeds, it returns
Gets the current set of parameters.
-A reference that receives a reference to the current set of parameters.
If this method succeeds, it returns
Sets the exposure compensation stop value.
-The exposure compensation value. The value range for exposure compensation is -5.0 through +5.0, which equates to 10 full stops.
If this method succeeds, it returns
It is recommended that a codec report that this method is supported even if the results at the outer range limits are not of perfect quality.
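In photographic terms each full stop doubles or halves the exposure, which is why the -5.0 through +5.0 range equates to 10 full stops. A hedged sketch of that relationship (illustrative only; codecs apply their own processing):

```python
def exposure_factor(stops):
    """Linear exposure multiplier for a compensation value given in stops."""
    if not -5.0 <= stops <= 5.0:
        raise ValueError("exposure compensation must be within -5.0..+5.0")
    return 2.0 ** stops

print(exposure_factor(1.0))   # one stop up doubles the exposure
print(exposure_factor(-1.0))  # one stop down halves it
```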
-Gets the exposure compensation stop value of the raw image.
-A reference that receives the exposure compensation stop value. The default is the "as-shot" setting.
If this method succeeds, it returns
Sets the white point RGB values.
-The red white point value.
The green white point value.
The blue white point value.
If this method succeeds, it returns
Due to other white point setting methods (e.g. SetWhitePointKelvin), care must be taken by codec implementers to ensure proper interoperability. For instance, if the caller sets the white point via a named white point, the codec implementer may wish to disable reading back the corresponding Kelvin temperature. In specific cases where the codec implementer wishes to deny a given action because of previous calls,
Gets the white point RGB values.
-A reference that receives the red white point value.
A reference that receives the green white point value.
A reference that receives the blue white point value.
If this method succeeds, it returns
Sets the named white point of the raw file.
-A bitwise combination of the enumeration values.
If this method succeeds, it returns
If the named white points are not supported by the raw image or the raw file contains named white points that are not supported by this API, the codec implementer should still mark this capability as supported.
If the named white points are not supported by the raw image, a best effort should be made to adjust the image to the named white point even when it isn't a pre-defined white point of the raw file.
If the raw file contains named white points not supported by this API, the codec implementer should support the named white points in the API.
Due to other white point setting methods (e.g. SetWhitePointKelvin), care must be taken by codec implementers to ensure proper interoperability. For instance, if the caller sets the white point via a named white point, the codec implementer may wish to disable reading back the corresponding Kelvin temperature. In specific cases where the codec implementer wishes to deny a given action because of previous calls,
Gets the named white point of the raw image.
-A reference that receives the bitwise combination of the enumeration values.
If this method succeeds, it returns
If the named white points are not supported by the raw image or the raw file contains named white points that are not supported by this API, the codec implementer should still mark this capability as supported.
If the named white points are not supported by the raw image, a best effort should be made to adjust the image to the named white point even when it isn't a pre-defined white point of the raw file.
If the raw file contains named white points not supported by this API, the codec implementer should support the named white points in
Sets the white point Kelvin value.
-The white point Kelvin value. Acceptable Kelvin values are 1,500 through 30,000.
If this method succeeds, it returns
Codec implementers should faithfully adjust the color temperature within the range supported natively by the raw image. For values outside the native support range, the codec implementer should provide a best effort representation of the image at that color temperature.
Codec implementers should return
Codec implementers must ensure proper interoperability with other white point setting methods such as SetWhitePointRGB. For example, if the caller sets the white point via SetNamedWhitePoint then the codec implementer may want to disable reading back the corresponding Kelvin temperature. In specific cases where the codec implementer wants to deny a given action because of previous calls,
Gets the white point Kelvin temperature of the raw image.
-A reference that receives the white point Kelvin temperature of the raw image. The default is the "as-shot" setting value.
If this method succeeds, it returns
Gets the information about the current Kelvin range of the raw image.
-A reference that receives the minimum Kelvin temperature.
A reference that receives the maximum Kelvin temperature.
A reference that receives the Kelvin step value.
If this method succeeds, it returns
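A caller would typically clamp a requested temperature into the reported range and snap it to the step grid before calling SetWhitePointKelvin. An illustrative sketch (the helper is hypothetical, not a WIC API):

```python
def snap_kelvin(requested, k_min, k_max, step):
    """Clamp a requested temperature to [k_min, k_max] and snap it to the step grid."""
    clamped = min(max(requested, k_min), k_max)
    return k_min + round((clamped - k_min) / step) * step

# Example with a hypothetical range of 1,500..30,000 K in 50 K steps.
print(snap_kelvin(6520, 1500, 30000, 50))  # snaps to 6500
```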
Sets the contrast value of the raw image.
-The contrast value of the raw image. The default value is the "as-shot" setting. The value range for contrast is 0.0 through 1.0. The 0.0 lower limit represents no contrast applied to the image, while the 1.0 upper limit represents the highest amount of contrast that can be applied.
If this method succeeds, it returns
The codec implementer must determine what the upper range value represents and must determine how to map the value to their image processing routines.
-Gets the contrast value of the raw image.
-A reference that receives the contrast value of the raw image. The default value is the "as-shot" setting. The value range for contrast is 0.0 through 1.0. The 0.0 lower limit represents no contrast applied to the image, while the 1.0 upper limit represents the highest amount of contrast that can be applied.
If this method succeeds, it returns
Sets the desired gamma value.
-The desired gamma value.
If this method succeeds, it returns
Gets the current gamma setting of the raw image.
-A reference that receives the current gamma setting.
If this method succeeds, it returns
Sets the sharpness value of the raw image.
-The sharpness value of the raw image. The default value is the "as-shot" setting. The value range for sharpness is 0.0 through 1.0. The 0.0 lower limit represents no sharpening applied to the image, while the 1.0 upper limit represents the highest amount of sharpness that can be applied.
If this method succeeds, it returns
The codec implementer must determine what the upper range value represents and must determine how to map the value to their image processing routines.
-Gets the sharpness value of the raw image.
-A reference that receives the sharpness value of the raw image. The default value is the "as-shot" setting. The value range for sharpness is 0.0 through 1.0. The 0.0 lower limit represents no sharpening applied to the image, while the 1.0 upper limit represents the highest amount of sharpness that can be applied.
If this method succeeds, it returns
Sets the saturation value of the raw image.
-The saturation value of the raw image. The value range for saturation is 0.0 through 1.0. A value of 0.0 represents a fully desaturated image, while a value of 1.0 represents the highest amount of saturation that can be applied.
If this method succeeds, it returns
The codec implementer must determine what the upper range value represents and must determine how to map the value to their image processing routines.
-Gets the saturation value of the raw image.
-A reference that receives the saturation value of the raw image. The default value is the "as-shot" setting. The value range for saturation is 0.0 through 1.0. A value of 0.0 represents a fully desaturated image, while a value of 1.0 represents the highest amount of saturation that can be applied.
If this method succeeds, it returns
Sets the tint value of the raw image.
-The tint value of the raw image. The default value is the "as-shot" setting if it exists or 0.0. The value range for tint is -1.0 through +1.0. The -1.0 lower limit represents a full green bias to the image, while the 1.0 upper limit represents a full magenta bias.
If this method succeeds, it returns
The codec implementer must determine what the outer range values represent and must determine how to map the values to their image processing routines.
-Gets the tint value of the raw image.
-A reference that receives the tint value of the raw image. The default value is the "as-shot" setting if it exists or 0.0. The value range for tint is -1.0 through +1.0. The -1.0 lower limit represents a full green bias to the image, while the 1.0 upper limit represents a full magenta bias.
If this method succeeds, it returns
Sets the noise reduction value of the raw image.
-The noise reduction value of the raw image. The default value is the "as-shot" setting if it exists or 0.0. The value range for noise reduction is 0.0 through 1.0. The 0.0 lower limit represents no noise reduction applied to the image, while the 1.0 upper limit represents the highest noise reduction amount that can be applied.
If this method succeeds, it returns
The codec implementer must determine what the upper range value represents and must determine how to map the value to their image processing routines.
-Gets the noise reduction value of the raw image.
-A reference that receives the noise reduction value of the raw image. The default value is the "as-shot" setting if it exists or 0.0. The value range for noise reduction is 0.0 through 1.0. The 0.0 lower limit represents no noise reduction applied to the image, while the 1.0 upper limit represents the highest noise reduction amount that can be applied.
If this method succeeds, it returns
Sets the destination color context.
-The destination color context.
If this method succeeds, it returns
Sets the tone curve for the raw image.
-The size of the pToneCurve structure.
The desired tone curve.
If this method succeeds, it returns
Gets the tone curve of the raw image.
-The size of the pToneCurve buffer.
A reference that receives the
A reference that receives the size needed to obtain the tone curve structure.
If this method succeeds, it returns
Sets the desired rotation angle.
-The desired rotation angle.
If this method succeeds, it returns
Gets the current rotation angle.
-A reference that receives the current rotation angle.
If this method succeeds, it returns
Sets the current
If this method succeeds, it returns
Gets the current
If this method succeeds, it returns
Sets the notification callback method.
-Pointer to the notification callback method.
If this method succeeds, it returns
An application-defined callback method used for raw image parameter change notifications.
-An application-defined callback method used for raw image parameter change notifications.
-A set of
If this method succeeds, it returns
Exposes methods that provide enumeration services for individual metadata items.
-Skips the given number of objects.
-The number of objects to skip.
If this method succeeds, it returns
Resets the current position to the beginning of the enumeration.
-If this method succeeds, it returns
Creates a copy of the current
If this method succeeds, it returns
Exposes methods used for in-place metadata editing. A fast metadata encoder enables you to add and remove metadata to an image without having to fully re-encode the image.
- A decoder must be created using the
Not all metadata formats support fast metadata encoding. The native metadata handlers that support fast metadata encoding are IFD, Exif, XMP, and GPS.
If a fast metadata encoder fails, the image will need to be fully re-encoded to add the metadata.
-Proxy function for the GetMetadataQueryWriter method.
-Finalizes metadata changes to the image stream.
-If this method succeeds, it returns
If the commit fails and returns
If the commit fails for any reason, you will need to re-encode the image to ensure the new metadata is added to the image.
-Proxy function for the GetMetadataQueryWriter method.
-If this function succeeds, it returns
Initializes the format converter.
-If you do not have a predefined palette, you must first create one. Use InitializeFromBitmap to create the palette object, then pass it in along with your other parameters.
dither, pIPalette, alphaThresholdPercent, and paletteTranslate are used to mitigate color loss when converting to a reduced bit-depth format. For conversions that do not need these settings, the following parameter values should be used: dither set to
The basic algorithm involved when using an ordered dither requires a fixed palette, found in the
If colors in pIPalette do not closely match those in paletteTranslate, the mapping may produce undesirable results.
When converting a bitmap which has an alpha channel, such as a Portable Network Graphics (PNG), to 8bpp, the alpha channel is normally ignored. Any pixels which were transparent in the original bitmap show up as black in the final output because both transparent and black have pixel values of zero in the respective formats.
Some 8bpp content can contain an alpha color; for instance, the Graphics Interchange Format (GIF) format allows for a single palette entry to be used as a transparent color. For this type of content, alphaThresholdPercent specifies what percentage of transparency should map to the transparent color. Because the alpha value is directly proportional to the opacity (not transparency) of a pixel, the alphaThresholdPercent indicates what level of opacity is mapped to the fully transparent color. For instance, 9.8% implies that any pixel with an alpha value of less than 25 will be mapped to the transparent color. A value of 100% maps all pixels which are not fully opaque to the transparent color. Note that the palette should provide a transparent color. If it does not, the 'transparent' color will be the one closest to zero - often black.
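The mapping from alphaThresholdPercent to an 8-bit alpha cutoff can be sketched as follows (illustrative only; the helper name is invented):

```python
def alpha_cutoff(threshold_percent):
    """8-bit alpha value below which pixels map to the transparent color."""
    return round(255 * threshold_percent / 100)

print(alpha_cutoff(9.8))   # 25, matching the example in the text
print(alpha_cutoff(100))   # 255: everything not fully opaque becomes transparent
```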
-Initializes the format converter.
-The input bitmap to convert
The destination pixel format
The
The palette to use for conversion.
The alpha threshold to use for conversion.
The palette translation type to use for conversion.
If this method succeeds, it returns
If you do not have a predefined palette, you must first create one. Use InitializeFromBitmap to create the palette object, then pass it in along with your other parameters.
dither, pIPalette, alphaThresholdPercent, and paletteTranslate are used to mitigate color loss when converting to a reduced bit-depth format. For conversions that do not need these settings, the following parameter values should be used: dither set to
The basic algorithm involved when using an ordered dither requires a fixed palette, found in the
If colors in pIPalette do not closely match those in paletteTranslate, the mapping may produce undesirable results.
When converting a bitmap which has an alpha channel, such as a Portable Network Graphics (PNG), to 8bpp, the alpha channel is normally ignored. Any pixels which were transparent in the original bitmap show up as black in the final output because both transparent and black have pixel values of zero in the respective formats.
Some 8bpp content can contain an alpha color; for instance, the Graphics Interchange Format (GIF) format allows for a single palette entry to be used as a transparent color. For this type of content, alphaThresholdPercent specifies what percentage of transparency should map to the transparent color. Because the alpha value is directly proportional to the opacity (not transparency) of a pixel, the alphaThresholdPercent indicates what level of opacity is mapped to the fully transparent color. For instance, 9.8% implies that any pixel with an alpha value of less than 25 will be mapped to the transparent color. A value of 100% maps all pixels which are not fully opaque to the transparent color. Note that the palette should provide a transparent color. If it does not, the 'transparent' color will be the one closest to zero - often black.
-Determines if the source pixel format can be converted to the destination pixel format.
-The source pixel format.
The destination pixel format.
A reference that receives a value indicating whether the source pixel format can be converted to the destination pixel format.
Exposes methods that provide information about a pixel format converter.
-Retrieves a list of GUIDs that signify which pixel formats the converter supports.
-The size of the pPixelFormatGUIDs array.
Pointer to a
The actual array size needed to retrieve all pixel formats supported by the converter.
If this method succeeds, it returns
The format converter does not necessarily guarantee symmetry of conversion; that is, a converter may be able to convert FROM a particular format without actually being able to convert TO that format. To test for symmetry, use CanConvert.
To determine the number of pixel formats a converter can handle, set cFormats to 0
and pPixelFormatGUIDs to
. The converter will fill pcActual with the number of formats supported by that converter.
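The count-then-fetch calling pattern described above can be sketched as follows (the function below is a mock of the idiom, not the real COM method; the format names are examples):

```python
def get_pixel_formats(buffer):
    """Mock of the two-call idiom: with an empty buffer, only the count is reported."""
    supported = ["GUID_WICPixelFormat24bppBGR", "GUID_WICPixelFormat32bppBGRA"]
    n = min(len(buffer), len(supported))
    buffer[:n] = supported[:n]
    return len(supported)  # pcActual: the number of formats supported

# First call with no buffer to learn the size, then allocate and fetch.
count = get_pixel_formats([])
formats = [None] * count
get_pixel_formats(formats)
print(formats)
```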
Creates a new
If this method succeeds, it returns
Encodes
Encodes the image to the frame given by the
If this method succeeds, it returns
The image passed in must be created on the same device as in
You must correctly and independently have set up the
Encodes the image as a thumbnail to the frame given by the
If this method succeeds, it returns
The image passed in must be created on the same device as in
You must correctly and independently have set up the
Encodes the given image as the thumbnail to the given WIC bitmap encoder.
-The Direct2D image that will be encoded.
The encoder on which the thumbnail is set.
Additional parameters to control encoding.
If this method succeeds, it returns
You must create the image that you pass in on the same device as in
Before you call WriteThumbnail, you must set up the
If WriteThumbnail fails, it might return E_OUTOFMEMORY,
Exposes methods used to create components for the Windows Imaging Component (WIC) such as decoders, encoders and pixel format converters.
-Creates a new instance of the
If this method succeeds, it returns
Creates a new instance of the
If this method succeeds, it returns
Creates a new instance of the
If this method succeeds, it returns
When a decoder is created using this method, the file handle must remain alive during the lifetime of the decoder.
-Proxy function for the CreateComponentInfo method.
-If this function succeeds, it returns
Creates a new instance of
If this method succeeds, it returns
Other values may be available for both guidContainerFormat and pguidVendor depending on the installed WIC-enabled encoders. The values listed are those that are natively supported by the operating system.
-Creates a new instance of the
If this method succeeds, it returns
Other values may be available for both guidContainerFormat and pguidVendor depending on the installed WIC-enabled encoders. The values listed are those that are natively supported by the operating system.
-Creates a new instance of the
If this method succeeds, it returns
Creates a new instance of the
If this method succeeds, it returns
Creates a new instance of an
If this method succeeds, it returns
Proxy function for the CreateBitmapClipper method.
-If this function succeeds, it returns
Proxy function for the CreateBitmapFlipRotator method.
-If this function succeeds, it returns
Creates a new instance of the
If this method succeeds, it returns
Creates a new instance of the
If this method succeeds, it returns
Creates a new instance of the
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Creates a
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Providing a rectangle that is larger than the source will produce undefined results.
This method always creates a separate copy of the source image, similar to the cache option
Creates an
If this method succeeds, it returns
The size of the
The stride of the destination bitmap will equal the stride of the source data, regardless of the width and height specified.
The pixelFormat parameter defines the pixel format for both the input data and the output bitmap.
-Creates an
If this method succeeds, it returns
For a non-palettized bitmap, set
Creates an
If this method succeeds, it returns
Creates an
If this method succeeds, it returns
Component types must be enumerated separately. Combinations of component types and
Creates a new instance of the fast metadata encoder based on the given
If this method succeeds, it returns
The Windows provided codecs do not support fast metadata encoding at the decoder level, and only support fast metadata encoding at the frame level. To create a fast metadata encoder from a frame, see CreateFastMetadataEncoderFromFrameDecode.
-Creates a new instance of the fast metadata encoder based on the given image frame.
-The
When this method returns, contains a reference to a new fast metadata encoder.
If this method succeeds, it returns
For a list of supported metadata formats for fast metadata encoding, see WIC Metadata Overview.
-Proxy function for the CreateQueryWriter method.
-If this function succeeds, it returns
Proxy function for the CreateQueryWriterFromReader method.
-If this function succeeds, it returns
An extension of the WIC factory interface that includes the ability to create an
Creates a new image encoder object.
-The
A reference to a variable that receives a reference to the
If this method succeeds, it returns
You must create images to pass to the image encoder on the same Direct2D device that you pass to this method.
You are responsible for setting up the bitmap encoder itself through the existing
Exposes methods for decoding JPEG images. Provides access to the Start Of Frame (SOF) header, Start of Scan (SOS) header, the Huffman and Quantization tables, and the compressed JPEG data. Also enables indexing for efficient random access.
-Obtain this interface by calling IUnknown::QueryInterface on the Windows-provided IWICBitmapFrameDecode interface for the JPEG decoder.
-Retrieves header data from the entire frame. The result includes parameters from the Start Of Frame (SOF) marker for the scan as well as parameters derived from other metadata such as the color model of the compressed data.
-Retrieves a value indicating whether this decoder supports indexing for efficient random access.
-True if indexing is supported; otherwise, false.
Returns
Indexing is only supported for some JPEG types. Call this method
-Enables indexing of the JPEG for efficient random access.
-A value specifying whether indexes should be generated immediately or deferred until a future call to
The granularity of the indexing, in pixels.
Returns
This method enables efficient random access to the image pixels at the expense of memory usage. The amount of memory required for indexing depends on the requested index granularity. Unless SetIndexing is called, it is much more efficient to access a JPEG by progressing through its pixels top-down during calls to
This method will fail if indexing is unsupported on the file.
The provided interval size controls the horizontal spacing of index entries. This value is internally rounded up according to the JPEG's MCU (minimum coded unit) size, which is typically either 8 or 16 unscaled pixels. The vertical size of the index interval is always equal to one MCU size.
Indexes can be generated immediately, or during future calls to
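Rounding the requested granularity up to a whole number of MCUs can be sketched as follows (illustrative; the actual MCU size depends on the image's chroma subsampling):

```python
def index_granularity(requested, mcu_size=16):
    """Round a requested horizontal index interval up to a whole number of MCUs."""
    return ((requested + mcu_size - 1) // mcu_size) * mcu_size

print(index_granularity(100, 16))  # 112 for a 16-pixel MCU
print(index_granularity(100, 8))   # 104 for an 8-pixel MCU
```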
Removes the indexing from a JPEG that has been indexed using
Returns
Retrieves a copy of the AC Huffman table for the specified scan and table.
-The zero-based index of the scan for which data is retrieved.
The index of the AC Huffman table to retrieve. Valid indices for a given scan can be determined by retrieving the scan header with
A reference that receives the table data. This parameter must not be
This method can return one of these values.
Return value | Description |
---|---|
| The operation was successful. |
| The specified scan index is invalid. |
| Can occur if pAcHuffmanTable is |
Retrieves a copy of the DC Huffman table for the specified scan and table.
-The zero-based index of the scan for which data is retrieved.
The index of the DC Huffman table to retrieve. Valid indices for a given scan can be determined by retrieving the scan header with
A reference that receives the table data. This parameter must not be
This method can return one of these values.
Return value | Description |
---|---|
| The operation was successful. |
| The specified scan index is invalid. |
| Can occur if pTable is |
Retrieves a copy of the quantization table.
-The zero-based index of the scan for which data is retrieved.
The index of the quantization table to retrieve. Valid indices for a given scan can be determined by retrieving the scan header with
A reference that receives the table data. This parameter must not be
This method can return one of these values.
Return value | Description |
---|---|
| The operation was successful. |
| The specified scan index is invalid. |
| Can occur if pTable is |
Retrieves header data from the entire frame. The result includes parameters from the Start Of Frame (SOF) marker for the scan as well as parameters derived from other metadata such as the color model of the compressed data.
-A reference that receives the frame header data.
Returns
Retrieves parameters from the Start Of Scan (SOS) marker for the scan with the specified index.
-The index of the scan for which header data is retrieved.
A reference that receives the frame header data.
Returns
Retrieves a copy of the compressed JPEG scan directly from the WIC decoder frame's output stream.
-The zero-based index of the scan for which data is retrieved.
The byte position in the scan data to begin copying. Use 0 on the first call. If the output buffer size is insufficient to store the entire scan, this offset allows you to resume copying from the end of the previous copy operation.
The size, in bytes, of the pbScanData array.
A reference that receives the table data. This parameter must not be
A reference that receives the size of the scan data actually copied into pbScanData. The size returned may be smaller than the size specified by cbScanData. This parameter may be
This method can return one of these values.
Return value | Description |
---|---|
| The operation was successful. |
| The specified scan index is invalid. |
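The offset parameter lets a caller drain a large scan through a small buffer across repeated calls. A mock of that resumable-copy pattern (not the real COM method):

```python
def copy_scan(scan, offset, buffer_size):
    """Mock of a resumable copy: return the chunk starting at offset and its size."""
    chunk = scan[offset:offset + buffer_size]
    return chunk, len(chunk)

# Drain a 10-byte scan through a 4-byte buffer, resuming via the offset.
scan = bytes(range(10))
collected, offset = b"", 0
while offset < len(scan):
    chunk, copied = copy_scan(scan, offset, 4)
    collected += chunk
    offset += copied
print(collected == scan)  # True
```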
Exposes methods for writing compressed JPEG scan data directly to the WIC encoder's output stream. Also provides access to the Huffman and quantization tables.
-Obtain this interface by calling IUnknown::QueryInterface on the Windows-provided IWICBitmapFrameEncode interface for the JPEG encoder.
The WIC JPEG encoder supports a smaller subset of JPEG features than the decoder does.
Retrieves a copy of the AC Huffman table for the specified scan and table.
-The zero-based index of the scan for which data is retrieved.
The index of the AC Huffman table to retrieve.
A reference that receives the table data. This parameter must not be
This method can return one of these values.
Return value | Description |
---|---|
| The operation was successful. |
| The specified scan index is invalid. |
| Can occur if pAcHuffmanTable is |
Retrieves a copy of the DC Huffman table for the specified scan and table.
-The zero-based index of the scan for which data is retrieved.
The index of the DC Huffman table to retrieve.
A reference that receives the table data. This parameter must not be
This method can return one of these values.
Return value | Description |
---|---|
| The operation was successful. |
| The specified scan index is invalid. |
| Can occur if pTable is |
Retrieves a copy of the quantization table.
-The zero-based index of the scan for which data is retrieved.
The index of the quantization table to retrieve.
A reference that receives the table data. This parameter must not be
This method can return one of these values.
Return value | Description |
---|---|
| The operation was successful. |
| The specified scan index is invalid. |
| Can occur if pTable is |
Writes scan data to a JPEG frame.
-The size of the data in the pbScanData parameter.
The scan data to write.
Returns
WriteScan may be called multiple times. Each call appends the scan data specified to any previous scan data. Complete the scan by calling
Any calls to set encoder parameters or image metadata that will appear before the scan data in the resulting JPEG file must be completed before the first call to this method. This includes calls to
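The append-then-commit semantics can be sketched with a small mock (illustrative; the real encoder writes to its output stream and is finalized through the frame's commit call):

```python
class ScanWriter:
    """Mock of WriteScan semantics: each call appends to the pending scan data."""
    def __init__(self):
        self._data = bytearray()
        self._committed = False

    def write_scan(self, data):
        if self._committed:
            raise RuntimeError("scan already committed")
        self._data += data

    def commit(self):
        self._committed = True
        return bytes(self._data)

w = ScanWriter()
w.write_scan(b"\xff\xd8")  # successive calls append...
w.write_scan(b"\x12\x34")
print(w.commit())          # ...and commit finalizes the scan
```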
Exposes methods for retrieving metadata blocks and items from a decoder or its image frames using a metadata query expression.
-A metadata query reader uses metadata query expressions to access embedded metadata. For more information on the metadata query language, see the Metadata Query Language Overview.
The benefit of the query reader is the ability to access a metadata item in a single step.
The query reader also provides a way to traverse the whole metadata hierarchy with the help of the GetEnumerator method. However, this is not recommended, since IWICMetadataBlockReader and IWICMetadataReader provide a more convenient and cheaper way.
-Gets the metadata query reader's container format.
-Gets the metadata query reader's container format.
-Pointer that receives the container format
If this method succeeds, it returns
Retrieves the current path relative to the root metadata block.
-The length of the wzNamespace buffer.
Pointer that receives the current namespace location.
The actual buffer length that was needed to retrieve the current namespace location.
If this method succeeds, it returns
If you pass
If the query reader is relative to the top of the metadata hierarchy, it will return a single-character string.
If the query reader is relative to a nested metadata block, this method will return the path to the current query reader.
-Retrieves the metadata block or item identified by a metadata query expression.
-The query expression to the requested metadata block or item.
When this method returns, contains the metadata block or item requested.
If this method succeeds, it returns
GetMetadataByName uses metadata query expressions to access embedded metadata. For more information on the metadata query language, see the Metadata Query Language Overview.
If multiple blocks or items exist that are expressed by the same query expression, the first metadata block or item found will be returned.
-Gets an enumerator of all metadata items at the current relative location within the metadata hierarchy.
-A reference to a variable that receives a reference to the
The retrieved enumerator only contains query strings for the metadata blocks and items in the current level of the hierarchy. -
-Exposes methods for setting or removing metadata blocks and items to an encoder or its image frames using a metadata query expression.
-A metadata query writer uses metadata query expressions to set or remove metadata. For more information on the metadata query language, see the Metadata Query Language Overview.
-Sets a metadata item to a specific location.
-The name of the metadata item.
The metadata to set.
If this method succeeds, it returns
SetMetadataByName uses metadata query expressions to set metadata. For more information on the metadata query language, see the Metadata Query Language Overview.
If the value set is a nested metadata block then use variant type VT_UNKNOWN
and pvarValue pointing to the
Proxy function for the RemoveMetadataByName method.
-If this function succeeds, it returns
Exposes methods for accessing and building a color table, primarily for indexed pixel formats.
-If the
InitializeFromBitmap's fAddTransparentColor parameter will add a transparent color to the end of the color collection if its size is less than 256; otherwise index 255 will be replaced with the transparent color. If a pre-defined palette type is used, it will change to BitmapPaletteTypeCustom since it no longer matches the predefined palette.
The palette interface is an auxiliary imaging interface in that it does not directly concern bitmaps and pixels; rather it provides indexed color translation for indexed bitmaps. For an indexed pixel format with M bits per pixel, the number of colors in the palette is no greater than 2^M.
Traditionally the basic operation of the palette is to provide a translation from a byte (or smaller) index into a 32bpp color value. This is often accomplished by a 256 entry table of color values.
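The index-to-color translation described above can be sketched as follows. `PaletteLookup` and `MaxPaletteSize` are illustrative names, not part of the WIC API; colors are 32bpp 0xAARRGGBB values as in the rest of this documentation.

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Translate a (byte or smaller) index into a 32bpp color value,
// as a palette's color table does.
uint32_t PaletteLookup(const std::vector<uint32_t>& colors, uint8_t index) {
    if (index >= colors.size())
        throw std::out_of_range("index exceeds palette color count");
    return colors[index];
}

// For an M-bit indexed pixel format, the palette holds at most 2^M entries.
size_t MaxPaletteSize(unsigned bitsPerPixel) {
    return static_cast<size_t>(1) << bitsPerPixel;
}
```

For an 8bpp indexed format this gives the traditional 256-entry table.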
-Retrieves the
WICBitmapPaletteCustom is used for palettes initialized from both InitializeCustom and InitializeFromBitmap. No distinction is made between optimized and custom palettes.
-Proxy function for the GetColorCount method.
-Retrieves a value that describes whether the palette is black and white.
-A palette is considered to be black and white only if it contains exactly two entries, one full black (0xFF000000) and one full white (0xFFFFFFFF). -
-Retrieves a value that describes whether a palette is grayscale.
-A palette is considered grayscale only if, for every entry, the alpha value is 0xFF and the red, green and blue values match. -
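The two definitions above are mechanical enough to express directly. A minimal sketch with illustrative names (these are not WIC API calls); entries are 32bpp 0xAARRGGBB values:

```cpp
#include <cstdint>
#include <vector>

// Black and white: exactly two entries, one full black and one full white.
bool IsBlackWhite(const std::vector<uint32_t>& colors) {
    if (colors.size() != 2) return false;
    bool hasBlack = colors[0] == 0xFF000000u || colors[1] == 0xFF000000u;
    bool hasWhite = colors[0] == 0xFFFFFFFFu || colors[1] == 0xFFFFFFFFu;
    return hasBlack && hasWhite;
}

// Grayscale: every entry has alpha 0xFF and matching red, green, blue.
bool IsGrayscale(const std::vector<uint32_t>& colors) {
    for (uint32_t c : colors) {
        uint8_t a = (c >> 24) & 0xFF, r = (c >> 16) & 0xFF;
        uint8_t g = (c >> 8) & 0xFF,  b = c & 0xFF;
        if (a != 0xFF || r != g || g != b) return false;
    }
    return true;
}
```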
-Initializes the palette to one of the pre-defined palettes specified by
If this method succeeds, it returns
If a transparent color is added to a palette, the palette is no longer predefined and is returned as
Proxy function for the InitializeCustom method.
-If this function succeeds, it returns
Initializes a palette using computed optimized values based on the reference bitmap.
-Pointer to the source bitmap.
The number of colors to initialize the palette with.
A value to indicate whether to add a transparent color.
If this method succeeds, it returns
The resulting palette contains the specified number of colors which best represent the colors present in the bitmap. The algorithm operates on the opaque RGB color value of each pixel in the reference bitmap and hence ignores any alpha values. If a transparent color is required, set the fAddTransparentColor parameter to TRUE and one fewer optimized color will be computed, reducing the colorCount, and a fully transparent color entry will be added.
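The color budget described above reduces to a small piece of arithmetic; the helper name is an assumption for illustration, not a WIC function:

```cpp
// When a transparent entry is requested, one fewer opaque color is
// computed from the bitmap, so the palette still totals colorCount entries.
unsigned OpaqueColorsComputed(unsigned colorCount, bool addTransparentColor) {
    return addTransparentColor ? colorCount - 1 : colorCount;
}
```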
-Initialize the palette based on a given palette.
-Pointer to the source palette.
If this method succeeds, it returns
Retrieves the
If this method succeeds, it returns
WICBitmapPaletteCustom is used for palettes initialized from both InitializeCustom and InitializeFromBitmap. No distinction is made between optimized and custom palettes.
-Proxy function for the GetColorCount method.
-If this function succeeds, it returns
Fills out the supplied color array with the colors from the internal color table. The color array should be sized according to the return results from GetColorCount.
-If this method succeeds, it returns
Retrieves a value that describes whether the palette is black and white.
-A reference to a variable that receives a boolean value that indicates whether the palette is black and white. TRUE indicates that the palette is black and white; otherwise,
If this method succeeds, it returns
A palette is considered to be black and white only if it contains exactly two entries, one full black (0xFF000000) and one full white (0xFFFFFFFF). -
-Retrieves a value that describes whether a palette is grayscale.
-A reference to a variable that receives a boolean value that indicates whether the palette is grayscale. TRUE indicates that the palette is grayscale; otherwise
If this method succeeds, it returns
A palette is considered grayscale only if, for every entry, the alpha value is 0xFF and the red, green and blue values match. -
-Proxy function for the HasAlpha method.
-If this function succeeds, it returns
Exposes methods that provide information about a pixel format.
-Gets the pixel format
Gets the pixel format's
The returned color context is the default color space for the pixel format. However, if an
Proxy function for the GetBitsPerPixel method.
-Proxy function for the GetChannelCount method.
-Gets the pixel format
Pointer that receives the pixel format
If this method succeeds, it returns
Gets the pixel format's
If this method succeeds, it returns
The returned color context is the default color space for the pixel format. However, if an
Proxy function for the GetBitsPerPixel method.
-If this function succeeds, it returns
Proxy function for the GetChannelCount method.
-If this function succeeds, it returns
Gets the pixel format's channel mask.
-The index to the channel mask to retrieve.
The size of the pbMaskBuffer buffer.
Pointer to the mask buffer.
The actual buffer size needed to obtain the channel mask.
If this method succeeds, it returns
If 0 and
Extends
Returns whether the format supports transparent pixels.
-An indexed pixel format will not return TRUE even though it may have some transparency support. -
-Returns whether the format supports transparent pixels.
-Returns TRUE if the pixel format supports transparency; otherwise,
If this method succeeds, it returns
An indexed pixel format will not return TRUE even though it may have some transparency support. -
-Returns the
If this method succeeds, it returns
Allows planar component image pixels to be written to an encoder. When supported by the encoder, this allows an application to encode planar component image data without first converting to an interleaved pixel format.
You can use QueryInterface to obtain this interface from the Windows provided implementation of
Encoding YCbCr data using
Writes lines from the source planes to the encoded format.
-The number of lines to encode. See the Remarks section for WIC Jpeg specific line count restrictions.
Specifies the source buffers for each component plane encoded.
The number of component planes specified by the pPlanes parameter.
If the planes and source rectangle do not meet the requirements, this method fails with
Successive WritePixels calls are assumed to sequentially add scanlines to the output image.
The interleaved pixel format set via
WIC JPEG Encoder:
- QueryInterface can be used to obtain this interface from the WIC JPEG
Depending upon the configured chroma subsampling, the lineCount parameter has the following restrictions: -
Chroma Subsampling | Line Count Restriction | Chroma Plane Width | Chroma Plane Height |
---|---|---|---|
4:2:0 | Multiple of 2, unless the call covers the last scanline of the image | lumaWidth / 2 Rounded up to the nearest integer. | lumaHeight / 2 Rounded up to the nearest integer. |
4:2:2 | Any | lumaWidth / 2 Rounded up to the nearest integer. | Any |
4:4:4 | Any | Any | Any |
4:4:0 | Multiple of 2, unless the call covers the last scanline of the image | Any | lumaHeight / 2 Rounded up to the nearest integer. |
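The chroma plane sizes in the table above follow one rule: halve the luma dimension and round up to the nearest integer. A minimal sketch with illustrative names (these helpers are not part of WIC):

```cpp
// Halve an extent, rounding up, as the subsampling tables specify.
unsigned HalfRoundedUp(unsigned lumaExtent) {
    return (lumaExtent + 1) / 2;
}

struct PlaneSize { unsigned width, height; };

// Chroma plane size for the 4:2:0 row of the table:
// both dimensions are halved and rounded up.
PlaneSize ChromaPlane420(unsigned lumaWidth, unsigned lumaHeight) {
    return { HalfRoundedUp(lumaWidth), HalfRoundedUp(lumaHeight) };
}
```

For 4:2:2 only the width is halved, and for 4:4:0 only the height; 4:4:4 leaves both dimensions unchanged.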
The full scanline width must be encoded, and the width of the bitmap sources must match their planar configuration.
Additionally, if a pixel format is set via
The supported pixel formats of the bitmap sources passed into this method are as follows: -
Plane Count | Plane 1 | Plane 2 | Plane 3 |
---|---|---|---|
3 | |||
2 | N/A |
-Writes lines from the source planes to the encoded format.
-Specifies an array of
The number of component planes specified by the planes parameter.
The source rectangle of pixels to encode from the
If the planes and source rectangle do not meet the requirements, this method fails with
If the
Successive WriteSource calls are assumed to sequentially add scanlines to the output image.
The interleaved pixel format set via
WIC JPEG Encoder:
- QueryInterface can be used to obtain this interface from the WIC JPEG
Depending upon the configured chroma subsampling, the source rectangle has the following restrictions: -
Chroma Subsampling | X Coordinate | Y Coordinate | Chroma Width | Chroma Height |
---|---|---|---|---|
4:2:0 | Multiple of 2 | Multiple of 2 | lumaWidth / 2 Rounded up to the nearest integer. | lumaHeight / 2 Rounded up to the nearest integer. |
4:2:2 | Multiple of 2 | Any | lumaWidth / 2 Rounded up to the nearest integer. | Any |
4:4:4 | Any | Any | Any | Any |
4:4:0 | Any | Multiple of 2 | lumaWidth | lumaHeight / 2 Rounded up to the nearest integer. |
The full scanline width must be encoded, and the width of the bitmap sources must match their planar configuration.
Additionally, if a pixel format is set via
The supported pixel formats of the bitmap sources passed into this method are as follows: -
Plane Count | Plane 1 | Plane 2 | Plane 3 |
---|---|---|---|
3 | |||
2 | N/A |
-Writes lines from the source planes to the encoded format.
-Specifies an array of
The number of component planes specified by the planes parameter.
The source rectangle of pixels to encode from the
If the planes and source rectangle do not meet the requirements, this method fails with
If the
Successive WriteSource calls are assumed to sequentially add scanlines to the output image.
The interleaved pixel format set via
WIC JPEG Encoder:
- QueryInterface can be used to obtain this interface from the WIC JPEG
Depending upon the configured chroma subsampling, the source rectangle has the following restrictions: -
Chroma Subsampling | X Coordinate | Y Coordinate | Chroma Width | Chroma Height |
---|---|---|---|---|
4:2:0 | Multiple of 2 | Multiple of 2 | lumaWidth / 2 Rounded up to the nearest integer. | lumaHeight / 2 Rounded up to the nearest integer. |
4:2:2 | Multiple of 2 | Any | lumaWidth / 2 Rounded up to the nearest integer. | Any |
4:4:4 | Any | Any | Any | Any |
4:4:0 | Any | Multiple of 2 | lumaWidth | lumaHeight / 2 Rounded up to the nearest integer. |
The full scanline width must be encoded, and the width of the bitmap sources must match their planar configuration.
Additionally, if a pixel format is set via
The supported pixel formats of the bitmap sources passed into this method are as follows: -
Plane Count | Plane 1 | Plane 2 | Plane 3 |
---|---|---|---|
3 | |||
2 | N/A |
-Writes lines from the source planes to the encoded format.
-Specifies an array of
The number of component planes specified by the planes parameter.
The source rectangle of pixels to encode from the
If the planes and source rectangle do not meet the requirements, this method fails with
If the
Successive WriteSource calls are assumed to sequentially add scanlines to the output image.
The interleaved pixel format set via
WIC JPEG Encoder:
- QueryInterface can be used to obtain this interface from the WIC JPEG
Depending upon the configured chroma subsampling, the source rectangle has the following restrictions: -
Chroma Subsampling | X Coordinate | Y Coordinate | Chroma Width | Chroma Height |
---|---|---|---|---|
4:2:0 | Multiple of 2 | Multiple of 2 | lumaWidth / 2 Rounded up to the nearest integer. | lumaHeight / 2 Rounded up to the nearest integer. |
4:2:2 | Multiple of 2 | Any | lumaWidth / 2 Rounded up to the nearest integer. | Any |
4:4:4 | Any | Any | Any | Any |
4:4:0 | Any | Multiple of 2 | lumaWidth | lumaHeight / 2 Rounded up to the nearest integer. |
The full scanline width must be encoded, and the width of the bitmap sources must match their planar configuration.
Additionally, if a pixel format is set via
The supported pixel formats of the bitmap sources passed into this method are as follows: -
Plane Count | Plane 1 | Plane 2 | Plane 3 |
---|---|---|---|
3 | |||
2 | N/A |
-Provides access to planar Y'CbCr pixel formats where pixel components are stored in separate component planes. This interface also allows access to other codec optimizations for flip/rotate, scale, and format conversion to other Y'CbCr planar formats; this is similar to the pre-existing
QueryInterface can be used to obtain this interface from the Windows provided implementations of
Use this method to determine if a desired planar output is supported and allow the caller to choose an optimized code path if it is. Otherwise, callers should fall back to
The following transforms can be checked:
When a transform is supported, this method returns the description of the resulting planes in the pPlaneDescriptions parameter. -
-Check the value of pfIsSupported to determine if the transform is supported via
Copies pixels into the destination planes, as configured by the supplied input parameters.
If a dstTransform, scale, or format conversion is specified, cbStride is the transformed stride and is based on the destination pixel format of the pDstPlanes parameter, not the original source's pixel format.
-The source rectangle of pixels to copy.
The width to scale the source bitmap. This parameter must be equal to a value obtainable through IWICPlanarBitmapSourceTransform::DoesSupportTransform.
The height to scale the source bitmap. This parameter must be equal to a value obtainable through IWICPlanarBitmapSourceTransform::DoesSupportTransform.
The desired rotation or flip to perform prior to the pixel copy. A rotate can be combined with a flip horizontal or a flip vertical, see
Used to specify additional configuration options for the transform. See
WIC JPEG Decoder:
Specifies the pixel format and output buffer for each component plane. The number of planes and pixel format of each plane must match values obtainable through
The number of component planes specified by the pDstPlanes parameter.
If the specified scale, flip/rotate, and planar format configuration is not supported this method fails with
WIC JPEG Decoder: - Depending on the configured chroma subsampling of the image, the source rectangle has the following restrictions: -
Chroma Subsampling | X Coordinate | Y Coordinate | Chroma Width | Chroma Height |
---|---|---|---|---|
4:2:0 | Multiple of 2 | Multiple of 2 | lumaWidth / 2 Rounded up to the nearest integer. | lumaHeight / 2 Rounded up to the nearest integer. |
4:2:2 | Multiple of 2 | Any | lumaWidth / 2 Rounded up to the nearest integer. | lumaHeight |
4:4:4 | Any | Any | lumaWidth | lumaHeight |
4:4:0 | Any | Multiple of 2 | lumaWidth | lumaHeight / 2 Rounded up to the nearest integer. |
The pDstPlanes parameter supports the following pixel formats.
Plane Count | Plane 1 | Plane 2 | Plane 3 |
---|---|---|---|
3 | |||
2 | N/A |
-Allows a format converter to be initialized with a planar source. You can use QueryInterface to obtain this interface from the Windows provided implementation of
Initializes a format converter with a planar source, and specifies the interleaved output pixel format.
-An array of
The number of component planes specified by the planes parameter.
The destination interleaved pixel format.
The
The palette to use for conversion.
The alpha threshold to use for conversion.
The palette translation type to use for conversion.
If this method succeeds, it returns
Initializes a format converter with a planar source, and specifies the interleaved output pixel format.
-An array of
The number of component planes specified by the planes parameter.
The destination interleaved pixel format.
The
The palette to use for conversion.
The alpha threshold to use for conversion.
The palette translation type to use for conversion.
If this method succeeds, it returns
Initializes a format converter with a planar source, and specifies the interleaved output pixel format.
-An array of
The number of component planes specified by the planes parameter.
The destination interleaved pixel format.
The
The palette to use for conversion.
The alpha threshold to use for conversion.
The palette translation type to use for conversion.
If this method succeeds, it returns
Query if the format converter can convert from one format to another.
-An array of WIC pixel formats that represents source image planes.
The number of source pixel formats specified by the pSrcFormats parameter.
The destination interleaved pixel format.
True if the conversion is supported.
If the conversion is not supported, this method returns
If this method fails, the out parameter pfCanConvert is invalid.
To specify an interleaved input pixel format, provide a length 1 array to pSrcPixelFormats.
-Notify method is documented only for compliance; its use is not recommended and may be altered or unavailable in the future. Instead, use RegisterProgressNotification. -
-If this method succeeds, it returns
Exposes methods for obtaining information about and controlling progressive decoding.
-Images can only be progressively decoded if they were progressively encoded. Progressive images automatically start at the highest (best quality) progressive level. The caller must manually set the decoder to a lower progressive level.
E_NOTIMPL is returned if the codec does not support progressive level decoding.
-Gets the number of levels of progressive decoding supported by the codec.
-Users should not use this function to iterate through the progressive levels of a progressive JPEG image. JPEG progressive levels are determined by the image and do not have a fixed level count. Using this method will force the application to wait for all progressive levels to be downloaded before it can return. Instead, applications should use the following code to iterate through the progressive levels of a progressive JPEG image.
-Gets or sets the decoder's current progressive level.
-The level always defaults to the highest progressive level. In order to decode a lower progressive level, SetCurrentLevel must first be called.
-Gets the number of levels of progressive decoding supported by the codec.
-Indicates the number of levels supported by the codec.
If this method succeeds, it returns
Users should not use this function to iterate through the progressive levels of a progressive JPEG image. JPEG progressive levels are determined by the image and do not have a fixed level count. Using this method will force the application to wait for all progressive levels to be downloaded before it can return. Instead, applications should use the following code to iterate through the progressive levels of a progressive JPEG image.
-Gets the decoder's current progressive level.
-Indicates the current level specified.
If this method succeeds, it returns
The level always defaults to the highest progressive level. In order to decode a lower progressive level, SetCurrentLevel must first be called.
-Specifies the level to retrieve on the next call to CopyPixels.
-If this method succeeds, it returns
A call does not have to request every level supported. If a caller requests level 1, without having previously requested level 0, the bits returned by the next call to CopyPixels will include both levels.
If the requested level is invalid, the error returned is
Represents a Windows Imaging Component (WIC) stream for referencing imaging and metadata content.
-Decoders and metadata handlers are expected to create sub streams of whatever stream they hold when handing off control for embedded metadata to another metadata handler. If the stream is not restricted then use MAXLONGLONG as the max size and offset 0.
The
Initializes a stream from another stream. Access rights are inherited from the underlying stream.
-The stream to use for initialization.
If this method succeeds, it returns
Initializes a stream from a particular file.
-The file used to initialize the stream.
The desired file access mode.
Value | Meaning |
---|---|
| Read access. |
| Write access. |
If this method succeeds, it returns
The
Initializes a stream to treat a block of memory as a stream. The stream cannot grow beyond the buffer size.
-Pointer to the buffer used to initialize the stream.
The size of the buffer.
If this method succeeds, it returns
This method should be avoided whenever possible. The caller is responsible for ensuring the memory block is valid for the lifetime of the stream when using InitializeFromMemory. A workaround for this behavior is to create an
If you require a growable memory stream, use CreateStreamOnHGlobal.
-Initializes the stream as a substream of another stream.
-Pointer to the input stream.
The stream offset used to create the new stream.
The maximum size of the stream.
If this method succeeds, it returns
The stream functions with its own stream position, independent of the underlying stream but restricted to a region. All seek positions are relative to the sub region. It is allowed, though not recommended, to have multiple writable sub streams overlapping the same range.
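The region model described above can be sketched as simple position arithmetic. The helper name and the clamping shown are assumptions for illustration, not part of IWICStream:

```cpp
#include <algorithm>
#include <cstdint>

// A sub stream keeps its own position; every seek is relative to the
// region [regionOffset, regionOffset + maxSize) of the underlying stream.
// Positions past the end of the region are clamped to the region boundary.
uint64_t ToUnderlyingPosition(uint64_t regionOffset, uint64_t maxSize,
                              uint64_t subPosition) {
    return regionOffset + std::min(subPosition, maxSize);
}
```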
-Contains members that identify a pattern within an image file which can be used to identify a particular format.
-The offset at which the pattern is located in the file.
The pattern length.
The actual pattern.
The pattern mask.
The end of the stream.
Specifies the pixel format, buffer, stride and size of a component plane for a planar pixel format.
-Describes the pixel format of the plane.
Pointer to the buffer that holds the plane's pixel components.
The stride of the buffer pointed to by pbData. Stride indicates the total number of bytes to go from the beginning of one scanline to the beginning of the next scanline.
The total size of the buffer pointed to by pbBuffer.
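The stride definition above implies two pieces of arithmetic worth spelling out; the helper names are assumptions for illustration, not members of this structure:

```cpp
#include <cstdint>

// Byte offset of pixel (x, y) within a plane buffer: whole scanlines
// are `stride` bytes apart, pixels within a line are bytesPerPixel apart.
uint32_t PixelOffset(uint32_t stride, uint32_t bytesPerPixel,
                     uint32_t x, uint32_t y) {
    return y * stride + x * bytesPerPixel;
}

// The smallest buffer that covers `height` full scanlines.
uint32_t MinimumBufferSize(uint32_t stride, uint32_t height) {
    return stride * height;
}
```

Note that the stride may exceed width times bytes-per-pixel when scanlines are padded for alignment, which is why size calculations must use the stride rather than the pixel width.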
Specifies the pixel format and size of a component plane.
-Describes the pixel format of the plane.
Component width of the plane.
Component height of the plane.
Specifies the
Specifies the DDS image dimension,
This defines parameters that you can use to override the default parameters normally used when encoding an image.
-If this parameter is not passed to the encoding API, the encoder uses these settings.
The pixel format to which the image is processed before it is written to the encoder.
The DPI in the x dimension.
The DPI in the y dimension.
The top corner in pixels of the image space to be encoded to the destination.
The left corner in pixels of the image space to be encoded to the destination.
The width in pixels of the part of the image to write.
The height in pixels of the part of the image to write.
Represents a JPEG frame header.
-Get the frame header for an image by calling
The width of the JPEG frame.
The height of the JPEG frame.
The transfer matrix of the JPEG frame.
The scan type of the JPEG frame.
The number of components in the frame.
The component identifiers.
The sample factors. Use one of the following constants, described in
The format of the quantization table indices. Use one of the following constants, described in
Represents a JPEG frame header.
-Get the scan header for an image by calling
The number of components in the scan.
The interval of reset markers within the scan.
The component identifiers.
The format of the quantization table indices. Use one of the following constants, described in
The start of the spectral selection.
The end of the spectral selection.
The successive approximation high.
The successive approximation low.
Defines raw codec capabilities.
-Size of the
The codec's major version.
The codec's minor version.
The
The
The
The
The
The
The
The
The
The
The
The
The
The
The
Represents a raw image tone curve.
-The number of tone curve points.
The array of tone curve points.
Represents a raw image tone curve point.
-The tone curve input.
The tone curve output.
The blend-state interface holds a description for blending state that you can bind to the output-merger stage.
-Blending applies a simple function to combine output values from a pixel shader with data in a render target. You have control over how the pixels are blended by using a predefined set of blending operations and preblending operations.
To create a blend-state object, call
Gets the description for blending state that you used to create the blend-state object.
-You use the description for blending state in a call to the
Gets the description for blending state that you used to create the blend-state object.
-A reference to a
You use the description for blending state in a call to the
The blend-state interface holds a description for blending state that you can bind to the output-merger stage. This blend-state interface supports logical operations as well as blending operations.
-Blending applies a simple function to combine output values from a pixel shader with data in a render target. You have control over how the pixels are blended by using a predefined set of blending operations and preblending operations.
To create a blend-state object, call
Gets the description for blending state that you used to create the blend-state object.
-You use the description for blending state in a call to the
Gets the description for blending state that you used to create the blend-state object.
-A reference to a
You use the description for blending state in a call to the
Describes the blend state that you use in a call to
Here are the default values for blend state.
State | Default Value |
---|---|
AlphaToCoverageEnable | |
IndependentBlendEnable | |
RenderTarget[0].BlendEnable | |
RenderTarget[0].SrcBlend | |
RenderTarget[0].DestBlend | |
RenderTarget[0].BlendOp | |
RenderTarget[0].SrcBlendAlpha | |
RenderTarget[0].DestBlendAlpha | |
RenderTarget[0].BlendOpAlpha | |
RenderTarget[0].RenderTargetWriteMask |
Note: If the driver type is set to
Describes the blend state that you use in a call to
Here are the default values for blend state.
State | Default Value |
---|---|
AlphaToCoverageEnable | |
IndependentBlendEnable | |
RenderTarget[0].BlendEnable | |
RenderTarget[0].LogicOpEnable | |
RenderTarget[0].SrcBlend | |
RenderTarget[0].DestBlend | |
RenderTarget[0].BlendOp | |
RenderTarget[0].SrcBlendAlpha | |
RenderTarget[0].DestBlendAlpha | |
RenderTarget[0].BlendOpAlpha | |
RenderTarget[0].LogicOp | |
RenderTarget[0].RenderTargetWriteMask |
If the driver type is set to
When you set the LogicOpEnable member of the first element of the RenderTarget array (RenderTarget[0]) to TRUE, you must also set the BlendEnable member of RenderTarget[0] to
A buffer interface accesses a buffer resource, which is unstructured memory. Buffers typically store vertex or index data.
-There are three types of buffers: vertex, index, and shader-constant buffers. Create a buffer resource by calling
A buffer must be bound to the pipeline before it can be accessed. Buffers can be bound to the input-assembler stage by calls to
Buffers can be bound to multiple pipeline stages simultaneously for reading. A buffer can also be bound to a single pipeline stage for writing; however, the same buffer cannot be bound for reading and writing simultaneously.
-Get the properties of a buffer resource.
-Get the properties of a buffer resource.
-Pointer to a resource description (see
Describes a buffer resource.
-This structure is used by
In addition to this structure, you can also use the CD3D11_BUFFER_DESC derived structure, which is defined in D3D11.h and behaves like an inherited class, to help create a buffer description.
If the bind flag is
Size of the buffer in bytes.
Identify how the buffer is expected to be read from and written to. Frequency of update is a key factor. The most common value is typically
Identify how the buffer will be bound to the pipeline. Flags (see
CPU access flags (see
Miscellaneous flags (see
The size of each element in the buffer structure (in bytes) when the buffer represents a structured buffer. For more info about structured buffers, see Structured Buffer.
The size value in StructureByteStride must match the size of the format that you use for views of the buffer. For example, if you use a shader resource view (SRV) to read a buffer in a pixel shader, the SRV format size must match the size value in StructureByteStride.
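The stride rule above can be illustrated with a hypothetical element type. `Particle` and the constants below are assumptions for illustration, not Direct3D names; the point is that StructureByteStride is simply the size of one element, matching the layout of the HLSL struct the view reads:

```cpp
#include <cstdint>

// Hypothetical structured-buffer element; mirrors an HLSL struct of
// eight 32-bit values (two float3 fields plus two scalars).
struct Particle {
    float position[3];
    float velocity[3];
    float age;
    float padding; // keeps the element a whole number of 16-byte vectors
};

// StructureByteStride must equal the element size seen by buffer views.
constexpr uint32_t kStructureByteStride = sizeof(Particle); // 32 bytes
constexpr uint32_t kElementCount = 1024;
// ByteWidth of the buffer description is stride times element count.
constexpr uint32_t kByteWidth = kStructureByteStride * kElementCount;
```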
This interface encapsulates an HLSL class.
-This interface is created by calling
Gets the
For more information about using the
Windows Phone 8: This API is supported.
-Gets a description of the current HLSL class.
- For more information about using the
An instance is not restricted to being used for a single type in a single shader. An instance is flexible and can be used for any shader that used the same type name or instance name when the instance was generated.
An instance does not replace the importance of reflection for a particular shader since a gotten instance will not know its slot location and a created instance only specifies a type name.
Windows Phone 8: This API is supported.
- Gets the
For more information about using the
Windows Phone 8: This API is supported.
-Gets a description of the current HLSL class.
- A reference to a
For more information about using the
An instance is not restricted to being used for a single type in a single shader. An instance is flexible and can be used for any shader that used the same type name or instance name when the instance was generated.
An instance does not replace the importance of reflection for a particular shader since a gotten instance will not know its slot location and a created instance only specifies a type name.
Windows Phone 8: This API is supported.
-Gets the instance name of the current HLSL class.
-The instance name of the current HLSL class.
The length of the pInstanceName parameter.
GetInstanceName will return a valid name only for instances acquired using
For more information about using the
Windows Phone 8: This API is supported.
-Gets the type of the current HLSL class.
-Type of the current HLSL class.
The length of the pTypeName parameter.
GetTypeName will return a valid name only for instances acquired using
For more information about using the
Windows Phone 8: This API is supported.
-This interface encapsulates an HLSL dynamic linkage.
-A class linkage object can hold up to 64K gotten instances. A gotten instance is a handle that references a variable name in any shader that is created with that linkage object. When you create a shader with a class linkage object, the runtime gathers these instances and stores them in the class linkage object. For more information about how a class linkage object is used, see Storing Variables and Types for Shaders to Share.
An
Gets the class-instance object that represents the specified HLSL class.
-The name of a class for which to get the class instance.
The index of the class instance.
The address of a reference to an
For more information about using the
A class instance must have at least 1 data member in order to be available for the runtime to use with
Windows Phone 8: This API is supported.
-Initializes a class-instance object that represents an HLSL class instance.
-The type name of a class to initialize.
Identifies the constant buffer that contains the class data.
The four-component vector offset from the start of the constant buffer where the class data will begin. Consequently, this is not a byte offset.
The texture slot for the first texture; there may be multiple textures following the offset.
The sampler slot for the first sampler; there may be multiple samplers following the offset.
The address of a reference to an
Returns
Instances can be created (or gotten) before or after a shader is created. Use the same shader linkage object to acquire a class instance and create the shader the instance is going to be used in.
For more information about using the
Windows Phone 8: This API is supported.
-A compute-shader interface manages an executable program (a compute shader) that controls the compute-shader stage.
-The compute-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a compute-shader interface, call
This interface is defined in D3D11.h.
-This interface encapsulates methods for measuring GPU performance.
-A counter can be created with
This is a derived class of
Counter data is gathered by issuing an
Counters are best suited for profiling.
For a list of the types of performance counters, see
Get a counter description.
-Get a counter description.
-Pointer to a counter description (see
The depth-stencil-state interface holds a description for depth-stencil state that you can bind to the output-merger stage.
-To create a depth-stencil-state object, call
Gets the description for depth-stencil state that you used to create the depth-stencil-state object.
-You use the description for depth-stencil state in a call to the
Gets the description for depth-stencil state that you used to create the depth-stencil-state object.
-A reference to a
You use the description for depth-stencil state in a call to the
Describes depth-stencil state.
-Pass a reference to
Depth-stencil state controls how depth-stencil testing is performed by the output-merger stage.
The following table shows the default values of depth-stencil states.
State | Default Value |
---|---|
DepthEnable | TRUE |
DepthWriteMask | D3D11_DEPTH_WRITE_MASK_ALL |
DepthFunc | D3D11_COMPARISON_LESS |
StencilEnable | FALSE |
StencilReadMask | D3D11_DEFAULT_STENCIL_READ_MASK |
StencilWriteMask | D3D11_DEFAULT_STENCIL_WRITE_MASK |
FrontFace.StencilFunc and BackFace.StencilFunc | D3D11_COMPARISON_ALWAYS |
FrontFace.StencilDepthFailOp and BackFace.StencilDepthFailOp | D3D11_STENCIL_OP_KEEP |
FrontFace.StencilPassOp and BackFace.StencilPassOp | D3D11_STENCIL_OP_KEEP |
FrontFace.StencilFailOp and BackFace.StencilFailOp | D3D11_STENCIL_OP_KEEP |
The formats that support stenciling are DXGI_FORMAT_D24_UNORM_S8_UINT and DXGI_FORMAT_D32_FLOAT_S8X24_UINT.
Enable depth testing.
Identify a portion of the depth-stencil buffer that can be modified by depth data (see
A function that compares depth data against existing depth data. The function options are listed in
Enable stencil testing.
Identify a portion of the depth-stencil buffer for reading stencil data.
Identify a portion of the depth-stencil buffer for writing stencil data.
Identify how to use the results of the depth test and the stencil test for pixels whose surface normal is facing towards the camera (see
Identify how to use the results of the depth test and the stencil test for pixels whose surface normal is facing away from the camera (see
A depth-stencil-view interface accesses a texture resource during depth-stencil testing.
-To create a depth-stencil view, call
To bind a depth-stencil view to the pipeline, call
A depth-stencil-view interface accesses a texture resource during depth-stencil testing.
-To create a depth-stencil view, call
To bind a depth-stencil view to the pipeline, call
A depth-stencil-view interface accesses a texture resource during depth-stencil testing.
-To create a depth-stencil view, call
To bind a depth-stencil view to the pipeline, call
The device interface represents a virtual adapter; it is used to create resources.
- A device is created using
Windows Phone 8: This API is supported.
IDXGIResource* pOtherResource(NULL);
hr = pOtherDeviceResource->QueryInterface( __uuidof(IDXGIResource), (void**)&pOtherResource );
HANDLE sharedHandle;
pOtherResource->GetSharedHandle(&sharedHandle);
The only resources that can be shared are 2D non-mipmapped textures. To share a resource between a Direct3D 9 device and a Direct3D 10 device the texture must have been created using the pSharedHandle argument of CreateTexture. The shared Direct3D 9 handle is then passed to OpenSharedResource in the hResource argument. The following code illustrates the method calls involved.
sharedHandle = NULL; // must be set to NULL to create, can use a valid handle here to open in D3D9
pDevice9->CreateTexture(..., pTex2D_9, &sharedHandle);
...
pDevice10->OpenSharedResource(sharedHandle, __uuidof(ID3D10Resource), (void**)(&tempResource10));
tempResource10->QueryInterface(__uuidof(ID3D10Texture2D), (void**)(&pTex2D_10));
tempResource10->Release();
// now use pTex2D_10 with pDevice10
Textures being shared from D3D9 to D3D10 have the following restrictions.
- Textures must be 2D
- Only 1 mip level is allowed
- Texture must have default usage
- Texture must be write only
- MSAA textures are not allowed
- Bind flags must have SHADER_RESOURCE and RENDER_TARGET set
- Only R10G10B10A2_UNORM, R16G16B16A16_FLOAT and R8G8B8A8_UNORM formats are allowed
If a shared texture is updated on one device
Gets information about the features
Gets information about whether the driver supports non-power-of-2 textures unconditionally. TRUE for hardware at Direct3D 10 and higher feature levels.
-Gets information about whether a rendering device batches rendering commands and performs multipass rendering into tiles or bins over a render area. Certain API usage patterns that are fine for tile-based deferred renderers (TBDRs) can perform worse on non-TBDRs and vice versa. Applications that are careful about rendering can be friendly to both TBDR and non-TBDR architectures.
-Creates a device that uses Direct3D 11 functionality in Direct3D 12, specifying a pre-existing D3D12 device to use for D3D11 interop.
- Specifies a pre-existing D3D12 device to use for D3D11 interop. May not be
Any of those documented for D3D11CreateDeviceAndSwapChain. Specifies which runtime layers to enable (see the
An array of any of the following:
The first feature level which is less than or equal to the D3D12 device's feature level will be used to perform D3D11 validation. Creation will fail if no acceptable feature levels are provided. Providing
An array of unique queues for D3D11On12 to use. Valid queue types: 3D command queue.
The function signature PFN_D3D11ON12_CREATE_DEVICE is provided as a typedef, so that you can use dynamic linking techniques (GetProcAddress) instead of statically linking.
-Gets the feature level of the hardware device.
-Feature levels determine the capabilities of your device.
-Get the flags used during the call to create the device with
Get the reason why the device was removed.
-Gets an immediate context, which can play back command lists.
-The GetImmediateContext method returns an
The GetImmediateContext method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Get or sets the exception-mode flags.
-An exception-mode flag is used to elevate an error condition to a non-continuable exception.
-Creates a buffer (vertex buffer, index buffer, or shader-constant buffer).
- A reference to a
A reference to a
If you don't pass anything to pInitialData, the initial content of the memory for the buffer is undefined. In this case, you need to write the buffer content some other way before the resource is read.
Address of a reference to the
This method returns E_OUTOFMEMORY if there is insufficient memory to create the buffer. See Direct3D 11 Return Codes for other possible return values.
For example code, see How to: Create a Vertex Buffer, How to: Create an Index Buffer or How to: Create a Constant Buffer.
For a constant buffer (BindFlags of
The Direct3D 11.1 runtime, which is available on Windows 8 and later operating systems, provides the following new functionality for CreateBuffer:
You can create a constant buffer that is larger than the maximum constant buffer size that a shader can access (4096 32-bit * 4-component constants = 64 KB). When you bind the constant buffer to the pipeline (for example, via PSSetConstantBuffers or PSSetConstantBuffers1), you can define a range of the buffer that the shader can access that fits within the 4096-constant limit.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher. On existing drivers that are implemented to feature level 10 and higher, a call to CreateBuffer to request a constant buffer that is larger than 4096 fails.
-Creates an array of 1D textures.
-If the method succeeds, the return code is
CreateTexture1D creates a 1D texture resource, which can contain a number of 1D subresources. The number of textures is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of
For a texture of width 32 with a full mipmap chain, the pInitialData array has the following 6 elements:
Create an array of 2D textures.
-If the method succeeds, the return code is
CreateTexture2D creates a 2D texture resource, which can contain a number of 2D subresources. The number of textures is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of
For a 32 x 32 texture with a full mipmap chain, the pInitialData array has the following 6 elements:
Create a single 3D texture.
-If the method succeeds, the return code is
CreateTexture3D creates a 3D texture resource, which can contain a number of 3D subresources. The number of textures is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of
Each element of pInitialData provides all of the slices that are defined for a given mip level. For example, for a 32 x 32 x 4 volume texture with a full mipmap chain, the array has the following 6 elements:
Create a shader-resource view for accessing data in a resource.
- Pointer to the resource that will serve as input to a shader. This resource must have been created with the
Pointer to a shader-resource view description (see
Address of a reference to an
This method returns one of the following Direct3D 11 Return Codes.
A resource is made up of one or more subresources; a view identifies which subresources to allow the pipeline to access. In addition, each resource is bound to the pipeline using a view. A shader-resource view is designed to bind any buffer or texture resource to the shader stages using the following API methods:
Because a view is fully typed, this means that typeless resources become fully typed when bound to the pipeline.
Note: To successfully create a shader-resource view from a typeless buffer (for example,
The Direct3D 11.1 runtime, which is available starting with Windows 8, allows you to use CreateShaderResourceView for the following new purpose.
You can create shader-resource views of video resources so that Direct3D shaders can process those shader-resource views. These video resources are either Texture2D or Texture2DArray. The value in the ViewDimension member of the
The runtime read+write conflict prevention logic (which stops a resource from being bound as an SRV and RTV or UAV at the same time) treats views of different parts of the same video surface as conflicting for simplicity. Therefore, the runtime does not allow an application to read from luma while the application simultaneously renders to chroma in the same surface even though the hardware might allow these simultaneous operations.
Windows Phone 8: This API is supported.
-Creates a view for accessing an unordered access resource.
-This method returns one of the Direct3D 11 Return Codes.
The Direct3D 11.1 runtime, which is available starting with Windows 8, allows you to use CreateUnorderedAccessView for the following new purpose.
You can create unordered-access views of video resources so that Direct3D shaders can process those unordered-access views. These video resources are either Texture2D or Texture2DArray. The value in the ViewDimension member of the
The runtime read+write conflict prevention logic (which stops a resource from being bound as an SRV and RTV or UAV at the same time) treats views of different parts of the same video surface as conflicting for simplicity. Therefore, the runtime does not allow an application to read from luma while the application simultaneously renders to chroma in the same surface even though the hardware might allow these simultaneous operations.
-Creates a render-target view for accessing resource data.
-Pointer to a
Pointer to a
Address of a reference to an
This method returns one of the Direct3D 11 Return Codes.
A render-target view can be bound to the output-merger stage by calling
The Direct3D 11.1 runtime, which is available starting with Windows 8, allows you to use CreateRenderTargetView for the following new purpose.
You can create render-target views of video resources so that Direct3D shaders can process those render-target views. These video resources are either Texture2D or Texture2DArray. The value in the ViewDimension member of the
The runtime read+write conflict prevention logic (which stops a resource from being bound as an SRV and RTV or UAV at the same time) treats views of different parts of the same video surface as conflicting for simplicity. Therefore, the runtime does not allow an application to read from luma while the application simultaneously renders to chroma in the same surface even though the hardware might allow these simultaneous operations.
-Create a depth-stencil view for accessing resource data.
-Pointer to the resource that will serve as the depth-stencil surface. This resource must have been created with the
Pointer to a depth-stencil-view description (see
Address of a reference to an
This method returns one of the following Direct3D 11 Return Codes.
A depth-stencil view can be bound to the output-merger stage by calling
Create an input-layout object to describe the input-buffer data for the input-assembler stage.
- An array of the input-assembler stage input data types; each type is described by an element description (see
The number of input-data types in the array of input-elements.
A reference to the compiled shader. The compiled shader code contains an input signature which is validated against the array of elements. See remarks.
Size of the compiled shader.
A reference to the input-layout object created (see
If the method succeeds, the return code is
After you create an input-layout object, bind it to the input-assembler stage before calling a draw API.
Once an input-layout object is created from a shader signature, the input-layout object can be reused with any other shader that has an identical input signature (semantics included). This can simplify the creation of input-layout objects when you are working with many shaders with identical inputs.
If a data type in the input-layout declaration does not match the data type in a shader-input signature, CreateInputLayout will generate a warning during compilation. The warning is simply to call attention to the fact that the data may be reinterpreted when read from a register. You may either disregard this warning (if reinterpretation is intentional) or make the data types match in both declarations to eliminate the warning.
Windows Phone 8: This API is supported.
-Create a vertex-shader object from a compiled shader.
-A reference to the compiled shader.
Size of the compiled vertex shader.
A reference to a class linkage interface (see
Address of a reference to a
This method returns one of the Direct3D 11 Return Codes.
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateVertexShader.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a vertex shader, you can successfully pass the compiled vertex shader to pShaderBytecode. That is, the call to CreateVertexShader succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateVertexShader fails. CreateVertexShader also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Create a geometry shader.
-A reference to the compiled shader.
Size of the compiled geometry shader.
A reference to a class linkage interface (see
Address of a reference to a
This method returns one of the following Direct3D 11 Return Codes.
After it is created, the shader can be set to the device by calling
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateGeometryShader.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a geometry shader, you can successfully pass the compiled geometry shader to pShaderBytecode. That is, the call to CreateGeometryShader succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateGeometryShader fails. CreateGeometryShader also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Creates a geometry shader that can write to streaming output buffers.
-A reference to the compiled geometry shader for a standard geometry shader plus stream output. For info on how to get this reference, see Getting a Pointer to a Compiled Shader.
To create the stream output without using a geometry shader, pass a reference to the output signature for the prior stage. To obtain this output signature, call the
Size of the compiled geometry shader.
Pointer to a
The number of entries in the stream output declaration (ranges from 0 to
An array of buffer strides; each stride is the size of an element for that buffer.
The number of strides (or buffers) in pBufferStrides (ranges from 0 to
The index number of the stream to be sent to the rasterizer stage (ranges from 0 to
A reference to a class linkage interface (see
Address of a reference to an
This method returns one of the Direct3D 11 Return Codes.
For more info about using CreateGeometryShaderWithStreamOutput, see Create a Geometry-Shader Object with Stream Output.
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateGeometryShaderWithStreamOutput.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a geometry shader, you can successfully pass the compiled geometry shader to pShaderBytecode. That is, the call to CreateGeometryShaderWithStreamOutput succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateGeometryShaderWithStreamOutput fails. CreateGeometryShaderWithStreamOutput also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Windows Phone 8: This API is supported.
-Create a pixel shader.
-A reference to the compiled shader.
Size of the compiled pixel shader.
A reference to a class linkage interface (see
Address of a reference to a
This method returns one of the following Direct3D 11 Return Codes.
After creating the pixel shader, you can set it to the device using
Create a hull shader.
-This method returns one of the Direct3D 11 Return Codes.
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateHullShader.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a hull shader, you can successfully pass the compiled hull shader to pShaderBytecode. That is, the call to CreateHullShader succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateHullShader fails. CreateHullShader also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Create a domain shader.
-This method returns one of the following Direct3D 11 Return Codes.
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateDomainShader.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a domain shader, you can successfully pass the compiled domain shader to pShaderBytecode. That is, the call to CreateDomainShader succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateDomainShader fails. CreateDomainShader also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Create a compute shader.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the compute shader. See Direct3D 11 Return Codes for other possible return values.
For an example, see How To: Create a Compute Shader and HDRToneMappingCS11 Sample.
-Creates class linkage libraries to enable dynamic shader linkage.
-A reference to a class-linkage interface reference (see
This method returns one of the following Direct3D 11 Return Codes.
The
Create a blend-state object that encapsulates blend state for the output-merger stage.
- Pointer to a blend-state description (see
Address of a reference to the blend-state object created (see
This method returns E_OUTOFMEMORY if there is insufficient memory to create the blend-state object. See Direct3D 11 Return Codes for other possible return values.
An application can create up to 4096 unique blend-state objects. For each object created, the runtime checks to see if a previous object has the same state. If such a previous object exists, the runtime will return a reference to the previous instance instead of creating a duplicate object.
Windows Phone 8: This API is supported.
-Create a depth-stencil state object that encapsulates depth-stencil test information for the output-merger stage.
-Pointer to a depth-stencil state description (see
Address of a reference to the depth-stencil state object created (see
This method returns one of the following Direct3D 11 Return Codes.
4096 unique depth-stencil state objects can be created on a device at a time.
If an application attempts to create a depth-stencil-state interface with the same state as an existing interface, the same interface will be returned and the total number of unique depth-stencil state objects will stay the same.
-Create a rasterizer state object that tells the rasterizer stage how to behave.
-Pointer to a rasterizer state description (see
Address of a reference to the rasterizer state object created (see
This method returns E_OUTOFMEMORY if there is insufficient memory to create the rasterizer state object. See Direct3D 11 Return Codes for other possible return values.
4096 unique rasterizer state objects can be created on a device at a time.
If an application attempts to create a rasterizer-state interface with the same state as an existing interface, the same interface will be returned and the total number of unique rasterizer state objects will stay the same.
-Create a sampler-state object that encapsulates sampling information for a texture.
-Pointer to a sampler state description (see
Address of a reference to the sampler state object created (see
This method returns one of the following Direct3D 11 Return Codes.
4096 unique sampler state objects can be created on a device at a time.
If an application attempts to create a sampler-state interface with the same state as an existing interface, the same interface will be returned and the total number of unique sampler state objects will stay the same.
-This interface encapsulates methods for querying information from the GPU.
-Pointer to a query description (see
Address of a reference to the query object created (see
This method returns E_OUTOFMEMORY if there is insufficient memory to create the query object. See Direct3D 11 Return Codes for other possible return values.
Creates a predicate.
-Pointer to a query description where the type of query must be a
Address of a reference to a predicate (see
This method returns one of the following Direct3D 11 Return Codes.
Create a counter object for measuring GPU performance.
-Pointer to a counter description (see
Address of a reference to a counter (see
If this function succeeds, it will return
E_INVALIDARG is returned whenever an out-of-range well-known or device-dependent counter is requested, or when the simultaneously active counters have been exhausted.
Creates a deferred context, which can record command lists.
-Reserved for future use. Pass 0.
Upon completion of the method, the passed reference to an
Returns
A deferred context is a thread-safe context that you can use to record graphics commands on a thread other than the main rendering thread. Using a deferred context, you can record graphics commands into a command list that is encapsulated by the
You can create multiple deferred contexts.
Note: If you use the
For more information about deferred contexts, see Immediate and Deferred Rendering.
Windows Phone 8: This API is supported.
-Give a device access to a shared resource created on a different device.
-A resource handle. See remarks.
The globally unique identifier (
Address of a reference to the resource we are gaining access to.
This method returns one of the following Direct3D 11 Return Codes.
The REFIID, or
The unique handle of the resource is obtained differently depending on the type of device that originally created the resource.
To share a resource between two Direct3D 11 devices the resource must have been created with the
The REFIID, or
When sharing a resource between two Direct3D 10/11 devices the unique handle of the resource can be obtained by querying the resource for the
IDXGIResource* pOtherResource(NULL);
hr = pOtherDeviceResource->QueryInterface( __uuidof(IDXGIResource), (void**)&pOtherResource );
HANDLE sharedHandle;
pOtherResource->GetSharedHandle(&sharedHandle);
The only resources that can be shared are 2D non-mipmapped textures.
To share a resource between a Direct3D 9 device and a Direct3D 11 device the texture must have been created using the pSharedHandle argument of CreateTexture. The shared Direct3D 9 handle is then passed to OpenSharedResource in the hResource argument.
The following code illustrates the method calls involved.
sharedHandle = NULL; // must be set to NULL to create, can use a valid handle here to open in D3D9
pDevice9->CreateTexture(..., pTex2D_9, &sharedHandle);
...
pDevice11->OpenSharedResource(sharedHandle, __uuidof(ID3D11Resource), (void**)(&tempResource11));
tempResource11->QueryInterface(__uuidof(ID3D11Texture2D), (void**)(&pTex2D_11));
tempResource11->Release();
// now use pTex2D_11 with pDevice11
Textures being shared from D3D9 to D3D11 have the following restrictions.
- Textures must be 2D
- Only 1 mip level is allowed
- Texture must have default usage
- Texture must be write only
- MSAA textures are not allowed
- Bind flags must have SHADER_RESOURCE and RENDER_TARGET set
- Only R10G10B10A2_UNORM, R16G16B16A16_FLOAT and R8G8B8A8_UNORM formats are allowed
If a shared texture is updated on one device
Get the support of a given format on the installed video device.
-A
A bitfield of
Get the number of quality levels available during multisampling.
-The texture format. See
The number of samples during multisampling.
Number of quality levels supported by the adapter. See remarks.
When multisampling a texture, the number of quality levels available for an adapter is dependent on the texture format used and the number of samples requested. The maximum number of quality levels is defined by
Furthermore, the definition of a quality level is left to each hardware vendor; no facility is provided by Direct3D to help discover this information.
Note that FEATURE_LEVEL_10_1 devices are required to support 4x MSAA for all render targets except R32G32B32A32 and R32G32B32. FEATURE_LEVEL_11_0 devices are required to support 4x MSAA for all render target formats, and 8x MSAA for all render target formats except R32G32B32A32 formats.
-Get a counter's information.
-Get the type, name, units of measure, and a description of an existing counter.
- Pointer to a counter description (see
Pointer to the data type of a counter (see
Pointer to the number of hardware counters that are needed for this counter type to be created. All instances of the same counter type use the same hardware counters.
String to be filled with a brief name for the counter. May be
Length of the string returned to szName. Can be
Name of the units a counter measures, provided the memory the reference points to has enough room to hold the string. Can be
Length of the string returned to szUnits. Can be
A description of the counter, provided the memory the reference points to has enough room to hold the string. Can be
Length of the string returned to szDescription. Can be
This method returns one of the following Direct3D 11 Return Codes.
Length parameters can be
Windows Phone 8: This API is supported.
-Gets information about the features that are supported by the current graphics driver.
-A member of the
Upon completion of the method, the passed structure is filled with data that describes the feature support.
The size of the structure passed to the pFeatureSupportData parameter.
Returns
To query for multi-threading support, pass the
Calling CheckFeatureSupport with Feature set to
Get application-defined data from a device.
-Guid associated with the data.
A reference to a variable that on input contains the size, in bytes, of the buffer that pData points to, and on output contains the size, in bytes, of the amount of data that GetPrivateData retrieved.
A reference to a buffer that GetPrivateData fills with data from the device if pDataSize points to a value that specifies a buffer large enough to hold the data.
This method returns one of the codes described in the topic Direct3D 11 Return Codes.
Set data to a device and associate that data with a guid.
-Guid associated with the data.
Size of the data.
Pointer to the data to be stored with this device. If pData is
This method returns one of the following Direct3D 11 Return Codes.
The data stored in the device with this method can be retrieved with
The data and guid set with this method will typically be application-defined.
The debug layer reports memory leaks by outputting a list of object interface references along with their friendly names. The default friendly name is "<unnamed>". You can set the friendly name so that you can determine if the corresponding object interface reference caused the leak. To set the friendly name, use the SetPrivateData method and the WKPDID_D3DDebugObjectName GUID that is in D3Dcommon.h.
```cpp
static const char c_szName[] = "My name";
hr = pContext->SetPrivateData( WKPDID_D3DDebugObjectName, sizeof( c_szName ) - 1, c_szName );
```
Associate an
Guid associated with the interface.
Pointer to an
This method returns one of the following Direct3D 11 Return Codes.
Gets the feature level of the hardware device.
-A member of the
Feature levels determine the capabilities of your device.
-Get the flags used during the call to create the device with
A bitfield containing the flags used to create the device. See
Get the reason why the device was removed.
-Possible return values include:
For more detail on these return codes, see DXGI_ERROR.
Gets an immediate context, which can play back command lists.
-Upon completion of the method, the passed reference to an
The GetImmediateContext method returns an
The GetImmediateContext method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Get the exception-mode flags.
-A value that contains one or more exception flags; each flag specifies a condition which will cause an exception to be raised. The flags are listed in D3D11_RAISE_FLAG. A default value of 0 means there are no flags.
This method returns one of the following Direct3D 11 Return Codes.
Set an exception-mode flag to elevate an error condition to a non-continuable exception.
Whenever an error occurs, a Direct3D device enters the DEVICEREMOVED state and if the appropriate exception flag has been set, an exception is raised. A raised exception is designed to terminate an application. Before termination, the last chance an application has to persist data is by using an UnhandledExceptionFilter (see Structured Exception Handling). In general, UnhandledExceptionFilters are leveraged to try to persist data when an application is crashing (to disk, for example). Any code that executes during an UnhandledExceptionFilter is not guaranteed to reliably execute (due to possible process corruption). Any data that the UnhandledExceptionFilter manages to persist, before the UnhandledExceptionFilter crashes again, should be treated as suspect, and therefore inspected by a new, non-corrupted process to see if it is usable.
-Get the exception-mode flags.
-A value that contains one or more exception flags; each flag specifies a condition which will cause an exception to be raised. The flags are listed in D3D11_RAISE_FLAG. A default value of 0 means there are no flags.
An exception-mode flag is used to elevate an error condition to a non-continuable exception.
-The device interface represents a virtual adapter; it is used to create resources.
{ D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3, D3D_FEATURE_LEVEL_9_2, D3D_FEATURE_LEVEL_9_1 };
- Gets an immediate context, which can play back command lists.
-GetImmediateContext1 returns an
GetImmediateContext1 increments the reference count of the immediate context by one. So, call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Gets an immediate context, which can play back command lists.
-Upon completion of the method, the passed reference to an
GetImmediateContext1 returns an
GetImmediateContext1 increments the reference count of the immediate context by one. So, call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Creates a deferred context, which can record command lists.
-Reserved for future use. Pass 0.
Upon completion of the method, the passed reference to an
Returns
A deferred context is a thread-safe context that you can use to record graphics commands on a thread other than the main rendering thread. By using a deferred context, you can record graphics commands into a command list that is encapsulated by the
You can create multiple deferred contexts.
Note: If you use the D3D11_CREATE_DEVICE_SINGLETHREADED value to create the device, CreateDeferredContext fails and you don't get a deferred context. For more information about deferred contexts, see Immediate and Deferred Rendering.
Windows Phone 8: This API is supported.
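The record-and-playback pattern described above can be sketched as follows (a sketch assuming a valid `ID3D11Device* pDevice` and immediate context `pImmediateContext`; the variable names are illustrative):

```cpp
// Create a deferred context; the ContextFlags parameter is reserved and must be 0.
ID3D11DeviceContext* pDeferredContext = nullptr;
HRESULT hr = pDevice->CreateDeferredContext(0, &pDeferredContext);
if (SUCCEEDED(hr))
{
    // ... record state and draw calls on pDeferredContext, typically on a worker thread ...

    // Close the recording into a command list.
    ID3D11CommandList* pCommandList = nullptr;
    pDeferredContext->FinishCommandList(FALSE, &pCommandList);

    // Play the recorded commands back on the immediate context.
    pImmediateContext->ExecuteCommandList(pCommandList, TRUE);

    pCommandList->Release();
    pDeferredContext->Release();
}
```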
-Creates a blend-state object that encapsulates blend state for the output-merger stage and allows the configuration of logic operations.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the blend-state object. See Direct3D 11 Return Codes for other possible return values.
The logical operations (those that enable bitwise logical operations between pixel shader output and render target contents, refer to
An app can create up to 4096 unique blend-state objects. For each object created, the runtime checks to see if a previous object has the same state. If such a previous object exists, the runtime will return a reference to the previous instance instead of creating a duplicate object.
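As an illustration of the logic-operation configuration mentioned above (a sketch assuming a valid `ID3D11Device1* pDevice1`; the XOR choice is illustrative):

```cpp
// A blend state that XORs pixel-shader output with the render-target contents.
// Note: LogicOpEnable and BlendEnable are mutually exclusive per render target.
D3D11_BLEND_DESC1 desc = {};
desc.RenderTarget[0].BlendEnable           = FALSE;
desc.RenderTarget[0].LogicOpEnable         = TRUE;
desc.RenderTarget[0].LogicOp               = D3D11_LOGIC_OP_XOR;
desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState1* pBlendState = nullptr;
HRESULT hr = pDevice1->CreateBlendState1(&desc, &pBlendState);
```

Because of the caching described above, creating the same description twice returns the same object rather than a duplicate.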
-Creates a rasterizer state object that informs the rasterizer stage how to behave and forces the sample count while UAV rendering or rasterizing.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the rasterizer state object. See Direct3D 11 Return Codes for other possible return values.
An app can create up to 4096 unique rasterizer state objects. For each object created, the runtime checks to see if a previous object has the same state. If such a previous object exists, the runtime will return a reference to the previous instance instead of creating a duplicate object.
-Creates a context state object that holds all Microsoft Direct3D state and some Direct3D behavior.
- A combination of
If you set the single-threaded flag for both the context state object and the device, you guarantee that you will call the whole set of context methods and device methods only from one thread. You therefore do not need to use critical sections to synchronize access to the device context, and the runtime can avoid working with those processor-intensive critical sections.
A reference to an array of
{ D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3, D3D_FEATURE_LEVEL_9_2, D3D_FEATURE_LEVEL_9_1 };
The number of elements in pFeatureLevels. Unlike
The SDK version. You must set this parameter to
The globally unique identifier (
A reference to a variable that receives a
The address of a reference to an
This method returns one of the Direct3D 11 Return Codes.
The REFIID value of the emulated interface is a __uuidof(
gets the
Call the
When a context state object is active, the runtime disables certain methods on the device and context interfaces. For example, a context state object that is created with __uuidof(ID3D11Device) will cause the runtime to turn off most of the Microsoft Direct3D 10 device interfaces, and a context state object that is created with __uuidof(ID3D10Device1) or __uuidof(ID3D10Device) will cause the runtime to turn off most of the ID3D11Device interfaces.
For example, suppose the tessellation stage is made active through the
The following table shows the methods that are active and inactive for each emulated interface.
Emulated interface | Active device or immediate context interfaces | Inactive device or immediate context interfaces |
---|---|---|
ID3D11Device | ID3D11Device, ID3D11DeviceContext | ID3D10Device, ID3D10Device1 |
ID3D10Device1 or ID3D10Device | ID3D10Device, ID3D10Device1 | ID3D11Device, ID3D11DeviceContext |
The following table shows the immediate context methods that the runtime disables when the indicated context state objects are active.
Methods of ID3D11DeviceContext when __uuidof(ID3D10Device1) or __uuidof(ID3D10Device) is active | Methods of ID3D10Device when __uuidof(ID3D11Device) is active |
---|---|
ClearDepthStencilView | ClearDepthStencilView |
ClearRenderTargetView | ClearRenderTargetView |
ClearState | ClearState |
ClearUnorderedAccessViewUint | |
ClearUnorderedAccessViewFloat | |
CopyResource | CopyResource |
CopyStructureCount | |
CopySubresourceRegion | CopySubresourceRegion |
CSGetConstantBuffers | |
CSGetSamplers | |
CSGetShader | |
CSGetShaderResources | |
CSGetUnorderedAccessViews | |
CSSetConstantBuffers | |
CSSetSamplers | |
CSSetShader | |
CSSetShaderResources | |
CSSetUnorderedAccessViews | |
Dispatch | |
DispatchIndirect | |
CreateBlendState | |
Draw | Draw |
DrawAuto | DrawAuto |
DrawIndexed | DrawIndexed |
DrawIndexedInstanced | DrawIndexedInstanced |
DrawIndexedInstancedIndirect | |
DrawInstanced | DrawInstanced |
DrawInstancedIndirect | |
DSGetConstantBuffers | |
DSGetSamplers | |
DSGetShader | |
DSGetShaderResources | |
DSSetConstantBuffers | |
DSSetSamplers | |
DSSetShader | |
DSSetShaderResources | |
ExecuteCommandList | |
FinishCommandList | |
Flush | Flush |
GenerateMips | GenerateMips |
GetPredication | GetPredication |
GetResourceMinLOD | |
GetType | |
GetTextFilterSize | |
GSGetConstantBuffers | GSGetConstantBuffers |
GSGetSamplers | GSGetSamplers |
GSGetShader | GSGetShader |
GSGetShaderResources | GSGetShaderResources |
GSSetConstantBuffers | GSSetConstantBuffers |
GSSetSamplers | GSSetSamplers |
GSSetShader | GSSetShader |
GSSetShaderResources | GSSetShaderResources |
HSGetConstantBuffers | |
HSGetSamplers | |
HSGetShader | |
HSGetShaderResources | |
HSSetConstantBuffers | |
HSSetSamplers | |
HSSetShader | |
HSSetShaderResources | |
IAGetIndexBuffer | IAGetIndexBuffer |
IAGetInputLayout | IAGetInputLayout |
IAGetPrimitiveTopology | IAGetPrimitiveTopology |
IAGetVertexBuffers | IAGetVertexBuffers |
IASetIndexBuffer | IASetIndexBuffer |
IASetInputLayout | IASetInputLayout |
IASetPrimitiveTopology | IASetPrimitiveTopology |
IASetVertexBuffers | IASetVertexBuffers |
OMGetBlendState | OMGetBlendState |
OMGetDepthStencilState | OMGetDepthStencilState |
OMGetRenderTargets | OMGetRenderTargets |
OMGetRenderTargetsAndUnorderedAccessViews | |
OMSetBlendState | OMSetBlendState |
OMSetDepthStencilState | OMSetDepthStencilState |
OMSetRenderTargets | OMSetRenderTargets |
OMSetRenderTargetsAndUnorderedAccessViews | |
PSGetConstantBuffers | PSGetConstantBuffers |
PSGetSamplers | PSGetSamplers |
PSGetShader | PSGetShader |
PSGetShaderResources | PSGetShaderResources |
PSSetConstantBuffers | PSSetConstantBuffers |
PSSetSamplers | PSSetSamplers |
PSSetShader | PSSetShader |
PSSetShaderResources | PSSetShaderResources |
ResolveSubresource | ResolveSubresource |
RSGetScissorRects | RSGetScissorRects |
RSGetState | RSGetState |
RSGetViewports | RSGetViewports |
RSSetScissorRects | RSSetScissorRects |
RSSetState | RSSetState |
RSSetViewports | RSSetViewports |
SetPredication | SetPredication |
SetResourceMinLOD | |
SetTextFilterSize | |
SOGetTargets | SOGetTargets |
SOSetTargets | SOSetTargets |
UpdateSubresource | UpdateSubresource |
VSGetConstantBuffers | VSGetConstantBuffers |
VSGetSamplers | VSGetSamplers |
VSGetShader | VSGetShader |
VSGetShaderResources | VSGetShaderResources |
VSSetConstantBuffers | VSSetConstantBuffers |
VSSetSamplers | VSSetSamplers |
VSSetShader | VSSetShader |
VSSetShaderResources | VSSetShaderResources |
The following table shows the immediate context methods that the runtime does not disable when the indicated context state objects are active.
Methods of ID3D11DeviceContext when __uuidof(ID3D10Device1) or __uuidof(ID3D10Device) is active | Methods of ID3D10Device when __uuidof(ID3D11Device) is active |
---|---|
Begin | |
End | |
GetCreationFlags | |
GetPrivateData | |
GetContextFlags | |
GetData | |
Map | |
Unmap |
The following table shows the ID3D10Device interface methods that the runtime does not disable because they are not immediate context methods.
Methods of ID3D10Device |
---|
CheckCounter |
CheckCounterInfo |
Create*, like CreateQuery |
GetDeviceRemovedReason |
GetExceptionMode |
OpenSharedResource |
SetExceptionMode |
SetPrivateData |
SetPrivateDataInterface |
Windows Phone 8: This API is supported.
-Give a device access to a shared resource created on a different device.
-A resource handle. See remarks.
The globally unique identifier (
Address of a reference to the resource we are gaining access to.
This method returns one of the following Direct3D 11 Return Codes.
The REFIID, or
The unique handle of the resource is obtained differently depending on the type of device that originally created the resource.
To share a resource between two Direct3D 11 devices the resource must have been created with the
The REFIID, or
When sharing a resource between two Direct3D 10/11 devices the unique handle of the resource can be obtained by querying the resource for the
```cpp
IDXGIResource* pOtherResource(NULL);
hr = pOtherDeviceResource->QueryInterface( __uuidof(IDXGIResource), (void**)&pOtherResource );
HANDLE sharedHandle;
pOtherResource->GetSharedHandle(&sharedHandle);
```
The only resources that can be shared are 2D non-mipmapped textures.
To share a resource between a Direct3D 9 device and a Direct3D 11 device the texture must have been created using the pSharedHandle argument of CreateTexture. The shared Direct3D 9 handle is then passed to OpenSharedResource in the hResource argument.
The following code illustrates the method calls involved.
```cpp
sharedHandle = NULL; // must be set to NULL to create, can use a valid handle here to open in D3D9
pDevice9->CreateTexture(..., &pTex2D_9, &sharedHandle);
...
pDevice11->OpenSharedResource(sharedHandle, __uuidof(ID3D11Resource), (void**)(&tempResource11));
tempResource11->QueryInterface(__uuidof(ID3D11Texture2D), (void**)(&pTex2D_11));
tempResource11->Release();
// now use pTex2D_11 with pDevice11
```
Textures being shared from D3D9 to D3D11 have the following restrictions.
If a shared texture is updated on one device, ID3D11DeviceContext::Flush must be called on that device.
Gives a device access to a shared resource that is referenced by name and that was created on a different device. You must have previously created the resource as shared and specified that it uses NT handles (that is, you set the
This method returns one of the Direct3D 11 return codes. This method also returns E_ACCESSDENIED if the permissions to access the resource aren't valid.
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, OpenSharedResourceByName fails with E_NOTIMPL because NTHANDLES are used. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
The behavior of OpenSharedResourceByName is similar to the behavior of the
To share a resource between two devices
The device interface represents a virtual adapter; it is used to create resources.
Gets an immediate context, which can play back command lists.
-The GetImmediateContext2 method returns an
The GetImmediateContext2 method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Gets an immediate context, which can play back command lists.
-The GetImmediateContext2 method returns an
The GetImmediateContext2 method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Creates a deferred context, which can record command lists.
- Returns
A deferred context is a thread-safe context that you can use to record graphics commands on a thread other than the main rendering thread. By using a deferred context, you can record graphics commands into a command list that is encapsulated by the
You can create multiple deferred contexts.
Note: If you use the D3D11_CREATE_DEVICE_SINGLETHREADED value to create the device, CreateDeferredContext2 fails and you don't get a deferred context. For more information about deferred contexts, see Immediate and Deferred Rendering.
-Gets info about how a tiled resource is broken into tiles.
-A reference to the tiled resource to get info about.
A reference to a variable that receives the number of tiles needed to store the entire tiled resource.
A reference to a
A reference to a
A reference to a variable that contains the number of tiles in the subresource. On input, this is the number of subresources to query tilings for; on output, this is the number that was actually retrieved at pSubresourceTilingsForNonPackedMips (clamped to what's available).
The number of the first subresource tile to get. GetResourceTiling ignores this parameter if the number that pNumSubresourceTilings points to is 0.
A reference to a
If subresource tiles are part of packed mipmaps, GetResourceTiling sets the members of
For more info about tiled resources, see Tiled resources.
-Get the number of quality levels available during multisampling.
-The texture format during multisampling.
The number of samples during multisampling.
A combination of D3D11_CHECK_MULTISAMPLE_QUALITY_LEVELS_FLAGS values that are combined by using a bitwise OR operation. Currently, only
A reference to a variable the receives the number of quality levels supported by the adapter. See Remarks.
When you multisample a texture, the number of quality levels available for an adapter is dependent on the texture format that you use and the number of samples that you request. The maximum number of quality levels is defined by
Furthermore, the definition of a quality level is up to each hardware vendor to define; however, no facility is provided by Direct3D to help discover this information.
Note that FEATURE_LEVEL_10_1 devices are required to support 4x MSAA for all render targets except R32G32B32A32 and R32G32B32. FEATURE_LEVEL_11_0 devices are required to support 4x MSAA for all render target formats, and 8x MSAA for all render target formats except R32G32B32A32 formats.
-The device interface represents a virtual adapter; it is used to create resources.
Gets an immediate context, which can play back command lists.
- The GetImmediateContext3 method outputs an
The GetImmediateContext3 method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Creates a 2D texture.
-If the method succeeds, the return code is
CreateTexture2D1 creates a 2D texture resource, which can contain a number of 2D subresources. The number of subresources is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of
For a 32 x 32 texture with a full mipmap chain, the pInitialData array has the following 6 elements:
Creates a 3D texture.
-If the method succeeds, the return code is
CreateTexture3D1 creates a 3D texture resource, which can contain a number of 3D subresources. The number of textures is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of
Each element of pInitialData provides all of the slices that are defined for a given miplevel. For example, for a 32 x 32 x 4 volume texture with a full mipmap chain, the array has the following 6 elements:
Creates a rasterizer state object that informs the rasterizer stage how to behave and forces the sample count while UAV rendering or rasterizing.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the rasterizer state object. See Direct3D 11 Return Codes for other possible return values.
Creates a shader-resource view for accessing data in a resource.
-Pointer to the resource that will serve as input to a shader. This resource must have been created with the
A reference to a
A reference to a memory block that receives a reference to a
This method returns E_OUTOFMEMORY if there is insufficient memory to create the shader-resource view. See Direct3D 11 Return Codes for other possible return values.
Creates a view for accessing an unordered access resource.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the unordered-access view. See Direct3D 11 Return Codes for other possible return values.
Creates a render-target view for accessing resource data.
-Pointer to a
Pointer to a
A reference to a memory block that receives a reference to a
This method returns one of the Direct3D 11 Return Codes.
A render-target view can be bound to the output-merger stage by calling
Creates a query object for querying information from the graphics processing unit (GPU).
-Pointer to a
A reference to a memory block that receives a reference to a
This method returns E_OUTOFMEMORY if there is insufficient memory to create the query object. See Direct3D 11 Return Codes for other possible return values.
Gets an immediate context, which can play back command lists.
- The GetImmediateContext3 method outputs an
The GetImmediateContext3 method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Creates a deferred context, which can record command lists.
- Returns
Copies data into a
The provided resource must be a
This API is intended for calling at high frequency. Callers can reduce memory by making iterative calls that update progressive regions of the texture, while providing a small buffer during each call. It is most efficient to specify large enough regions, though, because this enables D3D to fill whole cache lines in the texture before returning.
For efficiency, ensure the bounds and alignment of the extents within the box are ( 64 / [bytes per pixel] ) pixels horizontally. Vertical bounds and alignment should be 2 rows, except when 1-byte-per-pixel formats are used, in which case 4 rows are recommended. Single depth slices per call are handled efficiently. It is recommended but not necessary to provide references and strides which are 128-byte aligned.
When writing to sub mipmap levels, it is recommended to use larger widths and heights than described above. This is because small mipmap levels may actually be stored within a larger block of memory, with an opaque amount of offsetting which can interfere with alignment to cache lines.
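The region-by-region update strategy described above can be sketched as follows (a sketch assuming a valid `ID3D11Device3* pDevice3` and a CPU-accessible 2D texture `pTexture` in an RGBA8 format; the names and region size are illustrative):

```cpp
#include <vector>
#include <cstdint>

// Write a 64x64 pixel region into mip level 0 of the texture.
D3D11_BOX box = { 0, 0, 0, 64, 64, 1 };           // left, top, front, right, bottom, back
const UINT rowPitch = 64 * 4;                      // 64 pixels * 4 bytes per pixel (RGBA8)
std::vector<uint8_t> pixels(rowPitch * 64, 0xFF);  // source data in system memory
pDevice3->WriteToSubresource(pTexture, 0, &box, pixels.data(), rowPitch, rowPitch * 64);
```

A 64-pixel-wide region satisfies the horizontal alignment guidance above, since 64 / 4 bytes per pixel = 16 pixels.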
- Copies data from a
The provided resource must be a
This API is intended for calling at high frequency. Callers can reduce memory by making iterative calls that update progressive regions of the texture, while providing a small buffer during each call. It is most efficient to specify large enough regions, though, because this enables D3D to fill whole cache lines in the texture before returning.
For efficiency, ensure the bounds and alignment of the extents within the box are ( 64 / [Bytes per pixel] ) pixels horizontally. Vertical bounds and alignment should be 2 rows, except when 1-byte-per-pixel formats are used, in which case 4 rows are recommended. Single depth slices per call are handled efficiently. It is recommended but not necessary to provide references and strides which are 128-byte aligned.
When reading from sub mipmap levels, it is recommended to use larger widths and heights than described above. This is because small mipmap levels may actually be stored within a larger block of memory, with an opaque amount of offsetting which can interfere with alignment to cache lines.
-The device interface represents a virtual adapter; it is used to create resources.
Note: A newer version of this interface is available. A device is created using
Windows Phone 8: This API is supported.
-The device interface represents a virtual adapter; it is used to create resources.
A device-child interface accesses data used by a device.
-There are several types of device child interfaces, all of which inherit this interface. They include shaders, state objects, and input layouts.
Windows Phone 8: This API is supported.
-Get a reference to the device that created this interface.
-Any returned interfaces will have their reference count incremented by one, so be sure to call ::Release() on the returned reference(s) before they are freed, or else you will have a memory leak.
-Get a reference to the device that created this interface.
-Address of a reference to a device (see
Any returned interfaces will have their reference count incremented by one, so be sure to call ::Release() on the returned reference(s) before they are freed, or else you will have a memory leak.
-Get application-defined data from a device child.
-Guid associated with the data.
A reference to a variable that on input contains the size, in bytes, of the buffer that pData points to, and on output contains the size, in bytes, of the amount of data that GetPrivateData retrieved.
A reference to a buffer that GetPrivateData fills with data from the device child if pDataSize points to a value that specifies a buffer large enough to hold the data.
This method returns one of the Direct3D 11 Return Codes.
The data stored in the device child is set by calling
Windows Phone 8: This API is supported.
-Set application-defined data to a device child and associate that data with an application-defined guid.
-Guid associated with the data.
Size of the data.
Pointer to the data to be stored with this device child. If pData is
This method returns one of the following Direct3D 11 Return Codes.
The data stored in the device child with this method can be retrieved with
The debug layer reports memory leaks by outputting a list of object interface references along with their friendly names. The default friendly name is "<unnamed>". You can set the friendly name so that you can determine if the corresponding object interface reference caused the leak. To set the friendly name, use the SetPrivateData method and the WKPDID_D3DDebugObjectName GUID that is in D3Dcommon.h.
```cpp
static const char c_szName[] = "My name";
hr = pContext->SetPrivateData( WKPDID_D3DDebugObjectName, sizeof( c_szName ) - 1, c_szName );
```
Associate an
Guid associated with the interface.
Pointer to an
This method returns one of the following Direct3D 11 Return Codes.
When this method is called, ::AddRef() will be called on the
The
Bind an array of shader resources to the compute-shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources(ranges from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then the method will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets an array of views for an unordered resource.
-Index of the first element in the zero-based array to begin setting (ranges from 0 to D3D11_1_UAV_SLOT_COUNT - 1). D3D11_1_UAV_SLOT_COUNT is defined as 64.
Number of views to set (ranges from 0 to D3D11_1_UAV_SLOT_COUNT - StartSlot).
A reference to an array of
An array of append and consume buffer offsets. A value of -1 indicates to keep the current offset. Any other values set the hidden counter for that appendable and consumable UAV. pUAVInitialCounts is only relevant for UAVs that were created with either
Windows Phone 8: This API is supported.
-Set a compute shader to the device.
-Pointer to a compute shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a compute shader to the device.
-Pointer to a compute shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a compute shader to the device.
-Pointer to a compute shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
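The binding described above can be sketched as follows (a sketch assuming a valid device context `pContext` and a compiled compute shader `pComputeShader`; the dispatch dimensions are illustrative):

```cpp
// Bind the compute shader with no class instances, then launch it.
pContext->CSSetShader(pComputeShader, nullptr, 0);
pContext->Dispatch(16, 16, 1); // 16 x 16 x 1 thread groups
```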
-Set an array of sampler states to the compute-shader stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any sampler may be set to
```cpp
// Default sampler state:
D3D11_SAMPLER_DESC SamplerDesc;
SamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
SamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
```
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets the constant buffers used by the compute-shader stage.
-Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available starting with Windows 8, can bind a larger number of
If the application wants the shader to access other parts of the buffer, it must call the CSSetConstantBuffers1 method instead.
-Get the compute-shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
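The release pattern described above can be sketched as follows (a sketch assuming a valid device context `pContext`; the slot count is illustrative):

```cpp
// Query the first four CS shader-resource slots and balance the AddRef
// that the runtime performs on each returned interface.
ID3D11ShaderResourceView* views[4] = {};
pContext->CSGetShaderResources(0, 4, views);
// ... inspect views ...
for (UINT i = 0; i < 4; ++i)
{
    if (views[i]) views[i]->Release();
}
```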
-Gets an array of views for an unordered resource.
-Index of the first element in the zero-based array to return (ranges from 0 to D3D11_1_UAV_SLOT_COUNT - 1).
Number of views to get (ranges from 0 to D3D11_1_UAV_SLOT_COUNT - StartSlot).
A reference to an array of interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the compute shader currently set on the device.
-Address of a reference to a Compute shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler state interfaces from the compute-shader stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the constant buffers used by the compute-shader stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-The
D3D11_BOX sourceRegion;
sourceRegion.left = 120;
sourceRegion.right = 200;
sourceRegion.top = 100;
sourceRegion.bottom = 220;
sourceRegion.front = 0;
sourceRegion.back = 1;
pd3dDeviceContext->CopySubresourceRegion( pDestTexture, 0, 10, 20, 0, pSourceTexture, 0, &sourceRegion );
- Notice that for a 2D texture, front and back are set to 0 and 1 respectively.
- Gets a reference to the data contained in a subresource, and denies the GPU access to that subresource.
-A reference to a
Index number of the subresource.
Specifies the CPU's read and write permissions for a resource. For possible values, see
Flag that specifies what the CPU should do when the GPU is busy. This flag is optional.
A reference to the mapped subresource (see
This method also throws an exception with the code
For more information about these error codes, see DXGI_ERROR.
If you call Map on a deferred context, you can only pass
The Direct3D 11.1 runtime, which is available starting with Windows Developer Preview, can map shader resource views (SRVs) of dynamic buffers with
Gets the type of device context.
-Gets the initialization flags associated with the current deferred context.
-The GetContextFlags method gets the flags that were supplied to the ContextFlags parameter of
Draw indexed, non-instanced primitives.
-Number of indices to draw.
The location of the first index read by the GPU from the index buffer.
A value added to each index before reading a vertex from the vertex buffer.
A draw API submits work to the rendering pipeline.
If the sum of both indices is negative, the result of the function call is undefined.
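The index arithmetic described above can be sketched directly (an illustration of the documented behavior, not the runtime's actual implementation; the function name is ours):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// For the i-th index of a DrawIndexed call, the GPU reads
// indexBuffer[StartIndexLocation + i] and adds BaseVertexLocation
// before fetching the vertex. (The API leaves a negative sum
// undefined; this sketch simply computes it.)
std::int64_t FetchedVertexIndex(const std::vector<std::uint32_t>& indexBuffer,
                                std::uint32_t startIndexLocation,
                                std::int32_t baseVertexLocation,
                                std::uint32_t i)
{
    return static_cast<std::int64_t>(indexBuffer[startIndexLocation + i])
           + baseVertexLocation;
}
```

For example, with StartIndexLocation = 2 and BaseVertexLocation = 10, the first index drawn fetches vertex indexBuffer[2] + 10.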
-Draw non-indexed, non-instanced primitives.
-Number of vertices to draw.
Index of the first vertex, which is usually an offset in a vertex buffer.
Draw submits work to the rendering pipeline.
The vertex data for a draw call normally comes from a vertex buffer that is bound to the pipeline.
Even without any vertex buffer bound to the pipeline, you can generate your own vertex data in your vertex shader by using the SV_VertexID system-value semantic to determine the current vertex that the runtime is processing.
-Gets a reference to the data contained in a subresource, and denies the GPU access to that subresource.
-This method returns one of the Direct3D 11 Return Codes.
This method also returns
This method also returns
For more information about these error codes, see DXGI_ERROR.
If you call Map on a deferred context, you can only pass
For info about how to use Map, see How to: Use dynamic resources.
-Invalidate the reference to a resource and reenable the GPU's access to that resource.
- A reference to a
A subresource to be unmapped.
For info about how to use Unmap, see How to: Use dynamic resources.
Windows Phone 8: This API is supported.
-Draw indexed, instanced primitives.
-Number of indices read from the index buffer for each instance.
Number of instances to draw.
The location of the first index read by the GPU from the index buffer.
A value added to each index before reading a vertex from the vertex buffer.
A value added to each index before reading per-instance data from a vertex buffer.
A draw API submits work to the rendering pipeline.
Instancing may extend performance by reusing the same geometry to draw multiple objects in a scene. One example of instancing could be to draw the same object with different positions and colors. Instancing requires multiple vertex buffers: at least one for per-vertex data and a second buffer for per-instance data.
-Draw non-indexed, instanced primitives.
-Number of vertices to draw.
Number of instances to draw.
Index of the first vertex.
A value added to each index before reading per-instance data from a vertex buffer.
A draw API submits work to the rendering pipeline.
Instancing may extend performance by reusing the same geometry to draw multiple objects in a scene. One example of instancing could be to draw the same object with different positions and colors.
The vertex data for an instanced draw call normally comes from a vertex buffer that is bound to the pipeline. However, you could also provide the vertex data from a shader that has instanced data identified with a system-value semantic (SV_InstanceID).
-Mark the beginning of a series of commands.
-A reference to an
Use
Mark the end of a series of commands.
-A reference to an
Use
Get data from the graphics processing unit (GPU) asynchronously.
-A reference to an
Address of memory that will receive the data. If
Size of the data to retrieve or 0. Must be 0 when pData is
Optional flags. Can be 0 or any combination of the flags enumerated by
This method returns one of the Direct3D 11 Return Codes. A return value of
Queries in a deferred context are limited to predicated drawing. That is, you cannot call
GetData retrieves the data that the runtime collected between calls to
If DataSize is 0, GetData is only used to check status.
An application gathers counter data by calling
Set a rendering predicate.
-A reference to the
If TRUE, rendering will be affected when the predicate's conditions are met. If
The predicate must be in the "issued" or "signaled" state to be used for predication. While the predicate is set for predication, calls to
Use this method to denote that subsequent rendering and resource manipulation commands are not actually performed if the resulting predicate data of the predicate is equal to the PredicateValue. However, some predicates are only hints, so they may not actually prevent operations from being performed.
The primary usefulness of predication is to allow an application to issue rendering and resource manipulation commands without taking the performance hit of spinning, waiting for
Rendering and resource manipulation commands for Direct3D 11 include these Draw, Dispatch, Copy, Update, Clear, Generate, and Resolve operations.
You can set a rendering predicate on an immediate or a deferred context. For info about immediate and deferred contexts, see Immediate and Deferred Rendering.
-Draw geometry of an unknown size.
-A draw API submits work to the rendering pipeline. This API submits work of an unknown size that was processed by the input assembler, vertex shader, and stream-output stages; the work may or may not have gone through the geometry-shader stage.
After data has been streamed out to stream-output stage buffers, those buffers can be again bound to the Input Assembler stage at input slot 0 and DrawAuto will draw them without the application needing to know the amount of data that was written to the buffers. A measurement of the amount of data written to the SO stage buffers is maintained internally when the data is streamed out. This means that the CPU does not need to fetch the measurement before re-binding the data that was streamed as input data. Although this amount is tracked internally, it is still the responsibility of applications to use input layouts to describe the format of the data in the SO stage buffers so that the layouts are available when the buffers are again bound to the input assembler.
The following diagram shows the DrawAuto process.
Calling DrawAuto does not change the state of the streaming-output buffers that were bound again as inputs.
DrawAuto only works when drawing with one input buffer bound as an input to the IA stage at slot 0. Applications must create the SO buffer resource with both binding flags,
This API does not support indexing or instancing.
If an application needs to retrieve the size of the streaming-output buffer, it can query for statistics on streaming output by using
Draw indexed, instanced, GPU-generated primitives.
- A reference to an
Offset in pBufferForArgs to the start of the GPU generated primitives.
When an application creates a buffer that is associated with the
Windows Phone 8: This API is supported.
-Draw instanced, GPU-generated primitives.
-A reference to an
Offset in pBufferForArgs to the start of the GPU generated primitives.
When an application creates a buffer that is associated with the
Execute a command list from a thread group.
-The number of groups dispatched in the x direction. ThreadGroupCountX must be less than or equal to
The number of groups dispatched in the y direction. ThreadGroupCountY must be less than or equal to
The number of groups dispatched in the z direction. ThreadGroupCountZ must be less than or equal to
You call the Dispatch method to execute commands in a compute shader. A compute shader can be run on many threads in parallel within a thread group. Index a particular thread within a thread group using a 3D vector given by (x,y,z).
In the following illustration, assume a thread group with 50 threads where the size of the group is given by (5,5,2). A single thread is identified from a thread group with 50 threads in it, using the vector (4,1,1).
The following illustration shows the relationship between the parameters passed to
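The thread-addressing example above can be reproduced with simple arithmetic. This sketch mirrors how HLSL's SV_GroupIndex flattens SV_GroupThreadID; the function name is ours:

```cpp
#include <cassert>

// Flat index of thread (x, y, z) within a thread group of
// dimensions (sizeX, sizeY, sizeZ):
//   index = z * sizeX * sizeY + y * sizeX + x
unsigned FlatThreadIndex(unsigned x, unsigned y, unsigned z,
                         unsigned sizeX, unsigned sizeY)
{
    return z * sizeX * sizeY + y * sizeX + x;
}
```

For the (5,5,2) group above, thread (4,1,1) gets flat index 1*25 + 1*5 + 4 = 34, one of the 50 threads in the group.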
Execute a command list over one or more thread groups.
-A reference to an
A byte-aligned offset between the start of the buffer and the arguments.
You call the DispatchIndirect method to execute commands in a compute shader.
When an application creates a buffer that is associated with the
Copy a region from a source resource to a destination resource.
-A reference to the destination resource (see
Destination subresource index.
The x-coordinate of the upper left corner of the destination region.
The y-coordinate of the upper left corner of the destination region. For a 1D subresource, this must be zero.
The z-coordinate of the upper left corner of the destination region. For a 1D or 2D subresource, this must be zero.
A reference to the source resource (see
Source subresource index.
A reference to a 3D box (see
An empty box results in a no-op. A box is empty if the top value is greater than or equal to the bottom value, or the left value is greater than or equal to the right value, or the front value is greater than or equal to the back value. When the box is empty, CopySubresourceRegion doesn't perform a copy operation.
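The emptiness rule above can be written out directly. This is a sketch: the struct mirrors the fields of D3D11_BOX, but the names are ours.

```cpp
#include <cassert>

struct Box
{
    unsigned left, top, front;    // upper-left-front corner (inclusive)
    unsigned right, bottom, back; // lower-right-back corner (exclusive)
};

// A box is empty when any axis has zero (or inverted) extent;
// CopySubresourceRegion treats such a box as a no-op.
bool IsEmptyBox(const Box& b)
{
    return b.top >= b.bottom || b.left >= b.right || b.front >= b.back;
}
```

Note that this is why a 2D region still uses front = 0, back = 1: setting front equal to back would make the box empty.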
The source box must be within the size of the source resource. The destination offsets, (x, y, and z), allow the source box to be offset when writing into the destination resource; however, the dimensions of the source box and the offsets must be within the size of the resource. If you try and copy outside the destination resource or specify a source box that is larger than the source resource, the behavior of CopySubresourceRegion is undefined. If you created a device that supports the debug layer, the debug output reports an error on this invalid CopySubresourceRegion call. Invalid parameters to CopySubresourceRegion cause undefined behavior and might result in incorrect rendering, clipping, no copy, or even the removal of the rendering device.
If the resources are buffers, all coordinates are in bytes; if the resources are textures, all coordinates are in texels. D3D11CalcSubresource is a helper function for calculating subresource indexes.
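D3D11CalcSubresource itself is just index arithmetic. A sketch of the same computation (the real helper is an inline function in d3d11.h):

```cpp
#include <cassert>

// Subresources are ordered mip-major within each array slice:
//   subresource = MipSlice + ArraySlice * MipLevels
unsigned CalcSubresource(unsigned mipSlice, unsigned arraySlice, unsigned mipLevels)
{
    return mipSlice + arraySlice * mipLevels;
}
```

For example, in a texture array with 10 mip levels per slice, mip 2 of array slice 1 is subresource 12.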
CopySubresourceRegion performs the copy on the GPU (similar to a memcpy by the CPU). As a consequence, the source and destination resources:
CopySubresourceRegion only supports copy; it does not support any stretch, color key, or blend. CopySubresourceRegion can reinterpret the resource data between a few format types. For more info, see Format Conversion using Direct3D 10.1.
If your app needs to copy an entire resource, we recommend using
CopySubresourceRegion is an asynchronous call, which may be added to the command-buffer queue; this attempts to remove pipeline stalls that may occur when copying data. For more information about pipeline stalls, see performance considerations.
Note: Applies only to feature level 9_x hardware. If you use
Copy the entire contents of the source resource to the destination resource using the GPU.
-A reference to the
A reference to the
This method is unusual in that it causes the GPU to perform the copy operation (similar to a memcpy by the CPU). As a result, it has a few restrictions designed for improving performance. For instance, the source and destination resources:
CopyResource only supports copy; it doesn't support any stretch, color key, or blend. CopyResource can reinterpret the resource data between a few format types. For more info, see Format Conversion using Direct3D 10.1.
You can't use an Immutable resource as a destination. You can use a depth-stencil resource as either a source or a destination provided that the feature level is
The method is an asynchronous call, which may be added to the command-buffer queue. This attempts to remove pipeline stalls that may occur when copying data. For more info, see performance considerations.
We recommend using
The CPU copies data from memory to a subresource created in non-mappable memory.
-A reference to the destination resource (see
A zero-based index that identifies the destination subresource. See D3D11CalcSubresource for more details.
A reference to a box that defines the portion of the destination subresource to copy the resource data into. Coordinates are in bytes for buffers and in texels for textures. If
An empty box results in a no-op. A box is empty if the top value is greater than or equal to the bottom value, or the left value is greater than or equal to the right value, or the front value is greater than or equal to the back value. When the box is empty, UpdateSubresource doesn't perform an update operation.
A reference to the source data in memory.
The size of one row of the source data.
The size of one depth slice of source data.
For a shader-constant buffer, set pDstBox to
A resource cannot be used as a destination if:
When UpdateSubresource returns, the application is free to change or even free the data pointed to by pSrcData because the method has already copied/snapped away the original contents.
The performance of UpdateSubresource depends on whether or not there is contention for the destination resource. For example, contention for a vertex buffer resource occurs when the application executes a Draw call and later calls UpdateSubresource on the same vertex buffer before the Draw call is actually executed by the GPU.
To better understand the source row pitch and source depth pitch parameters, the following illustration shows a 3D volume texture.
Each block in this visual represents an element of data, and the size of each element is dependent on the resource's format. For example, if the resource format is
To calculate the source row pitch and source depth pitch for a given resource, use the following formulas:
SourceRowPitch = [size of one element in bytes] * [number of elements in one row]
SourceDepthPitch = SourceRowPitch * [number of rows (height)]
In the case of this example 3D volume texture where the size of each element is 16 bytes, the formulas are as follows:
SourceRowPitch = 16 * [width]
SourceDepthPitch = 16 * [width] * [height]
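The pitch arithmetic can be sketched as small helpers (names are ours, not part of the D3D11 API; this assumes a tightly packed subresource):

```cpp
#include <cassert>

// Row pitch: bytes from the start of one row to the start of the next.
unsigned RowPitch(unsigned bytesPerElement, unsigned width)
{
    return bytesPerElement * width;
}

// Depth pitch: bytes from the start of one depth slice to the next.
unsigned DepthPitch(unsigned bytesPerElement, unsigned width, unsigned height)
{
    return bytesPerElement * width * height;
}
```

With the 16-byte elements of the example, a 64x64 slice has a row pitch of 1024 bytes and a depth pitch of 65536 bytes.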
The following illustration shows the resource as it is laid out in memory.
For example, the following code snippet shows how to specify a destination region in a 2D texture. Assume the destination texture is 512x512 and the operation will copy the data pointed to by pData to [(120,100)..(200,220)] in the destination texture. Also assume that rowPitch has been initialized with the proper value (as explained above). front and back are set to 0 and 1 respectively, because if front equaled back, the box would be empty and no data would be copied.
D3D11_BOX destRegion;
destRegion.left = 120;
destRegion.right = 200;
destRegion.top = 100;
destRegion.bottom = 220;
destRegion.front = 0;
destRegion.back = 1;
pd3dDeviceContext->UpdateSubresource( pDestTexture, 0, &destRegion, pData, rowPitch, 0 );
The 1D case is similar. The following snippet shows how to specify a destination region in a 1D texture. Use the same assumptions as above, except that the texture is 512 in length.
D3D11_BOX destRegion;
destRegion.left = 120;
destRegion.right = 200;
destRegion.top = 0;
destRegion.bottom = 1;
destRegion.front = 0;
destRegion.back = 1;
pd3dDeviceContext->UpdateSubresource( pDestTexture, 0, &destRegion, pData, rowPitch, 0 );
For info about various resource types and how UpdateSubresource might work with each resource type, see Introduction to a Resource in Direct3D 11.
-Copies data from a buffer holding variable length data.
-Pointer to
Offset from the start of pDstBuffer to write 32-bit UINT structure (vertex) count from pSrcView.
Pointer to an
Set all the elements in a render target to one value.
-Pointer to the render target.
A 4-component array that represents the color to fill the render target with.
Applications that wish to clear a render target to a specific integer value bit pattern should render a screen-aligned quad instead of using this method, because this method accepts a floating-point value as input, which may not have the same bit pattern as the original integer.
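The bit-pattern caveat is easy to see directly: the IEEE 754 encoding of a float rarely matches the integer of the same value. A sketch (FloatBits is our helper name):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Reinterpret the bits of a 32-bit float as an unsigned integer.
std::uint32_t FloatBits(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    return bits;
}
```

For example, 1.0f is stored as 0x3F800000, not 0x00000001, so a float clear value cannot reproduce an arbitrary integer bit pattern.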
Differences between Direct3D 9 and Direct3D 11/10: Unlike Direct3D 9, the full extent of the resource view is always cleared. Viewport and scissor settings are not applied.
When using D3D_FEATURE_LEVEL_9_x, ClearRenderTargetView only clears the first array slice in the render target view. This can impact (for example) cube map rendering scenarios. Applications should create a render target view for each face or array slice, then clear each view individually.
-Clears an unordered access resource with bit-precise values.
-This API copies the lower ni bits from each array element i to the corresponding channel, where ni is the number of bits in the ith channel of the resource format (for example, R8G8B8_FLOAT has 8 bits for the first 3 channels). This works on any UAV with no format conversion. For a raw or structured buffer view, only the first array element value is used.
-Clears an unordered access resource with a float value.
-This API works on FLOAT, UNORM, and SNORM unordered access views (UAVs), with format conversion from FLOAT to *NORM where appropriate. On other UAVs, the operation is invalid and the call will not reach the driver.
-Clears the depth-stencil resource.
-Pointer to the depth stencil to be cleared.
Identify the type of data to clear (see
Clear the depth buffer with this value. This value will be clamped between 0 and 1.
Clear the stencil buffer with this value.
Differences between Direct3D 9 and Direct3D 11/10: Unlike Direct3D 9, the full extent of the resource view is always cleared. Viewport and scissor settings are not applied.
-Generates mipmaps for the given shader resource.
-A reference to an
You can call GenerateMips on any shader-resource view to generate the lower mipmap levels for the shader resource. GenerateMips uses the largest mipmap level of the view to recursively generate the lower levels of the mip and stops with the smallest level that is specified by the view. If the base resource wasn't created with
Feature levels 9.1, 9.2, and 9.3 can't support automatic generation of mipmaps for 3D (volume) textures.
Video adapters that support feature level 9.1 and higher support generating mipmaps if you use any of these formats:
- - - - - - -
Video adapters that support feature level 9.2 and higher support generating mipmaps if you use any of these formats in addition to any of the formats for feature level 9.1:
- - - - -
Video adapters that support feature level 9.3 and higher support generating mipmaps if you use any of these formats in addition to any of the formats for feature levels 9.1 and 9.2:
- DXGI_FORMAT_B4G4R4A4 (optional) -
Video adapters that support feature level 10 and higher support generating mipmaps if you use any of these formats in addition to any of the formats for feature levels 9.1, 9.2, and 9.3:
(optional) - - - - - - - - - - - - - - - (optional) -
For all other unsupported formats, GenerateMips will silently fail.
-Sets the minimum level-of-detail (LOD) for a resource.
-A reference to an
The level-of-detail, which ranges between 0 and the maximum number of mipmap levels of the resource. For example, the maximum number of mipmap levels of a 1D texture is specified in the MipLevels member of the
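The maximum number of mipmap levels follows the usual halving rule, which can be sketched as follows (the function name is ours):

```cpp
#include <cassert>

// A full mip chain halves the dimension (rounding down) until it
// reaches 1, so a texture whose largest dimension is n has
// floor(log2(n)) + 1 levels.
unsigned FullMipChainLevels(unsigned maxDimension)
{
    unsigned levels = 1;
    while (maxDimension > 1)
    {
        maxDimension /= 2;
        ++levels;
    }
    return levels;
}
```

For example, a 512-wide 1D texture has at most 10 mip levels, so a minimum LOD for it ranges over [0, 10).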
To use a resource with SetResourceMinLOD, you must set the
For Direct3D 10 and Direct3D 10.1, when sampling from a texture resource in a shader, the sampler can define a minimum LOD clamp to force sampling from less detailed mip levels. For Direct3D 11, this functionality is extended from the sampler to the entire resource. Therefore, the application can specify the highest-resolution mip level of a resource that is available for access. This restricts the set of mip levels that are required to be resident in GPU memory, thereby saving memory.
The set of mip levels resident per-resource in GPU memory can be specified by the user.
Minimum LOD affects all of the resident mip levels. Therefore, only the resident mip levels can be updated and read from.
All methods that access texture resources must adhere to minimum LOD clamps.
Empty-set accesses are handled as out-of-bounds cases.
-Gets the minimum level-of-detail (LOD).
-A reference to an
Returns the minimum LOD.
Copy a multisampled resource into a non-multisampled resource.
-Destination resource. Must be created with the
A zero-based index that identifies the destination subresource. Use D3D11CalcSubresource to calculate the index.
Source resource. Must be multisampled.
The source subresource of the source resource.
A
This API is most useful when re-using the resulting rendertarget of one render pass as an input to a second render pass.
The source and destination resources must be the same resource type and have the same dimensions. In addition, they must have compatible formats. There are three scenarios for this:
Scenario | Requirements
---|---
Source and destination are prestructured and typed | Both the source and destination must have identical formats, and that format must be specified in the Format parameter.
One resource is prestructured and typed and the other is prestructured and typeless | The typed resource must have a format that is compatible with the typeless resource (i.e. the typed resource is
Source and destination are prestructured and typeless | Both the source and destination must have the same typeless format (i.e. both must have For example, given the
-Queues commands from a command list onto a device.
- A reference to an
A Boolean flag that determines whether the target context state is saved prior to and restored after the execution of a command list. Use TRUE to indicate that the runtime needs to save and restore the state. Use
Use this method to play back a command list that was recorded by a deferred context on any thread.
A call to ExecuteCommandList of a command list from a deferred context onto the immediate context is required for the recorded commands to be executed on the graphics processing unit (GPU). A call to ExecuteCommandList of a command list from a deferred context onto another deferred context can be used to merge recorded lists. But to run the commands from the merged deferred command list on the GPU, you need to execute them on the immediate context.
This method performs some runtime validation related to queries. Queries that are begun in a device context cannot be manipulated indirectly by executing a command list (that is, Begin or End was invoked against the same query by the deferred context which generated the command list). If such a condition occurs, the ExecuteCommandList method does not execute the command list. However, the state of the device context is still maintained, as would be expected (
Windows Phone 8: This API is supported.
-Get the rendering predicate state.
-Address of a boolean to fill with the predicate comparison value.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Restore all default settings.
-This method resets any device context to the default settings. This sets all input/output resource slots, shaders, input layouts, predications, scissor rectangles, depth-stencil state, rasterizer state, blend state, sampler state, and viewports to
For a scenario where you would like to clear a list of commands recorded so far, call
Sends queued-up commands in the command buffer to the graphics processing unit (GPU).
-Most applications don't need to call this method. If an application calls this method when not necessary, it incurs a performance penalty. Each call to Flush incurs a significant amount of overhead.
When Microsoft Direct3D state-setting, present, or draw commands are called by an application, those commands are queued into an internal command buffer. Flush sends those commands to the GPU for processing. Typically, the Direct3D runtime sends these commands to the GPU automatically whenever the runtime determines that they need to be sent, such as when the command buffer is full or when an application maps a resource. Flush sends the commands manually.
We recommend that you use Flush when the CPU waits for an arbitrary amount of time (such as when you call the Sleep function).
Because Flush operates asynchronously, it can return either before or after the GPU finishes executing the queued graphics commands. However, the graphics commands eventually always complete. You can call the
Microsoft Direct3D 11 defers the destruction of objects. Therefore, an application can't rely upon objects immediately being destroyed. By calling Flush, you destroy any objects whose destruction was deferred. If an application requires synchronous destruction of an object, we recommend that the application release all its references, call
Gets the type of device context.
-A member of
Gets the initialization flags associated with the current deferred context.
-The GetContextFlags method gets the flags that were supplied to the ContextFlags parameter of
Create a command list and record graphics commands into it.
- A Boolean flag that determines whether the runtime saves deferred context state before it executes FinishCommandList and restores it afterwards. Use TRUE to indicate that the runtime needs to save and restore the state. Use
Upon completion of the method, the passed reference to an
Returns
Create a command list from a deferred context and record commands into it by calling FinishCommandList. Play back a command list with an immediate context by calling
Immediate context state is cleared before and after a command list is executed. A command list has no concept of inheritance. Each call to FinishCommandList will record only the state set since any previous call to FinishCommandList.
For example, the state of a device context is its render state or pipeline state. To retrieve device context state, an application can call
For more information about how to use FinishCommandList, see How to: Record a Command List.
Windows Phone 8: This API is supported.
-The
Bind a single vertex buffer to the input-assembler stage.
-The first input slot for binding. The first vertex buffer is explicitly bound to the start slot; this causes each additional vertex buffer in the array to be implicitly bound to each subsequent input slot. The maximum of 16 or 32 input slots (ranges from 0 to
A
For information about creating vertex buffers, see Create a Vertex Buffer.
Calling this method using a buffer that is currently bound for writing (i.e. bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Bind an array of vertex buffers to the input-assembler stage.
-The first input slot for binding. The first vertex buffer is explicitly bound to the start slot; this causes each additional vertex buffer in the array to be implicitly bound to each subsequent input slot. The maximum of 16 or 32 input slots (ranges from 0 to
A reference to an array of
For information about creating vertex buffers, see Create a Vertex Buffer.
Calling this method using a buffer that is currently bound for writing (i.e. bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Bind an array of vertex buffers to the input-assembler stage.
-The first input slot for binding. The first vertex buffer is explicitly bound to the start slot; this causes each additional vertex buffer in the array to be implicitly bound to each subsequent input slot. The maximum of 16 or 32 input slots (ranges from 0 to
A reference to an array of vertex buffers (see
Pointer to an array of stride values; one stride value for each buffer in the vertex-buffer array. Each stride is the size (in bytes) of the elements that are to be used from that vertex buffer.
Pointer to an array of offset values; one offset value for each buffer in the vertex-buffer array. Each offset is the number of bytes between the first element of a vertex buffer and the first element that will be used.
For information about creating vertex buffers, see Create a Vertex Buffer.
Calling this method using a buffer that is currently bound for writing (i.e. bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
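The stride and offset parameters described above combine as simple byte arithmetic: the i-th vertex in a bound buffer starts at offset + i * stride. A sketch (the function name is ours):

```cpp
#include <cassert>

// Byte position of vertex i within a bound vertex buffer, given the
// buffer's bound offset and the per-vertex stride.
unsigned VertexByteOffset(unsigned offsetInBytes, unsigned strideInBytes, unsigned i)
{
    return offsetInBytes + i * strideInBytes;
}
```

For example, with a 12-byte stride (a tightly packed float3 position) and a zero offset, vertex 3 begins at byte 36.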
-Gets or sets a reference to the input-layout object that is bound to the input-assembler stage.
-For information about creating an input-layout object, see Creating the Input-Layout Object.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Gets or sets information about the primitive type and data order that describes input data for the input assembler stage.
-Bind an input-layout object to the input-assembler stage.
-A reference to the input-layout object (see
Input-layout objects describe how vertex buffer data is streamed into the IA pipeline stage. To create an input-layout object, call
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
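As a rough sketch of how an input layout maps interleaved vertex attributes to byte offsets, the following hypothetical `ElementDesc`/`computeAlignedOffsets` pair imitates the running-offset behavior of D3D11_APPEND_ALIGNED_ELEMENT; the real API describes elements with D3D11_INPUT_ELEMENT_DESC, and these names are invented for illustration:

```cpp
#include <cstdint>
#include <vector>

// Minimal stand-in for one entry of an input layout: a semantic name
// and the byte size of the attribute's format (e.g. a float3 is 12).
struct ElementDesc {
    const char* semantic;
    uint32_t    sizeInBytes;
};

// Packs the attributes back to back, returning the byte offset of each
// element within a vertex and the resulting vertex stride.
std::vector<uint32_t> computeAlignedOffsets(
        const std::vector<ElementDesc>& elems, uint32_t& strideOut) {
    std::vector<uint32_t> offsets;
    uint32_t running = 0;
    for (const auto& e : elems) {
        offsets.push_back(running);   // this element starts here
        running += e.sizeInBytes;     // next element follows immediately
    }
    strideOut = running;
    return offsets;
}
```

A POSITION (float3), NORMAL (float3), TEXCOORD (float2) vertex would get offsets 0, 12, and 24 and a 32-byte stride.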
-Bind an array of vertex buffers to the input-assembler stage.
-For info about creating vertex buffers, see How to: Create a Vertex Buffer.
Calling this method using a buffer that is currently bound for writing (that is, bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
Windows Phone 8: This API is supported.
-Bind an index buffer to the input-assembler stage.
- A reference to an
A
Offset (in bytes) from the start of the index buffer to the first index to use.
For information about creating index buffers, see How to: Create an Index Buffer.
Calling this method using a buffer that is currently bound for writing (i.e. bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
Windows Phone 8: This API is supported.
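The index format and byte-offset rule can be illustrated with a hypothetical helper (DXGI_FORMAT_R16_UINT indices are 2 bytes each, DXGI_FORMAT_R32_UINT indices are 4 bytes; this is a sketch of the arithmetic, not API code):

```cpp
#include <cstddef>

// Each index's size in bytes is determined by the index-buffer format.
enum class IndexFormat { R16Uint = 2, R32Uint = 4 };

// The byte location of index n is the starting offset plus n times
// the size of one index in the chosen format.
std::size_t indexByteLocation(IndexFormat fmt,
                              std::size_t offsetBytes,
                              std::size_t indexNumber) {
    return offsetBytes + indexNumber * static_cast<std::size_t>(fmt);
}
```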
-Bind information about the primitive type and data order that describes the input data for the input-assembler stage.
-The type of primitive and ordering of the primitive data (see D3D11_PRIMITIVE_TOPOLOGY).
Windows Phone 8: This API is supported.
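To make the effect of topology concrete, here is a minimal sketch of how list versus strip ordering determines how many primitives the input assembler produces from the same vertex stream (illustrative helpers, not part of the API):

```cpp
#include <cstddef>

// With a triangle list, every 3 consecutive vertices form one triangle.
std::size_t triangleListCount(std::size_t vertexCount) {
    return vertexCount / 3;
}

// With a triangle strip, the first 3 vertices form one triangle and
// each additional vertex adds one more.
std::size_t triangleStripCount(std::size_t vertexCount) {
    return vertexCount < 3 ? 0 : vertexCount - 2;
}
```

Six vertices yield 2 triangles as a list but 4 as a strip, which is why the topology must match how the vertex data was ordered.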
-Get a reference to the input-layout object that is bound to the input-assembler stage.
-A reference to the input-layout object (see
For information about creating an input-layout object, see Creating the Input-Layout Object.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the vertex buffers bound to the input-assembler stage.
-The input slot of the first vertex buffer to get. The first vertex buffer is explicitly bound to the start slot; this causes each additional vertex buffer in the array to be implicitly bound to each subsequent input slot. The maximum of 16 or 32 input slots (ranges from 0 to
The number of vertex buffers to get starting at the offset. The number of buffers (plus the starting slot) cannot exceed the total number of IA-stage input slots.
A reference to an array of vertex buffers returned by the method (see
Pointer to an array of stride values returned by the method; one stride value for each buffer in the vertex-buffer array. Each stride value is the size (in bytes) of the elements that are to be used from that vertex buffer.
Pointer to an array of offset values returned by the method; one offset value for each buffer in the vertex-buffer array. Each offset is the number of bytes between the first element of a vertex buffer and the first element that will be used.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get a reference to the index buffer that is bound to the input-assembler stage.
-A reference to an index buffer returned by the method (see
Specifies format of the data in the index buffer (see
Offset (in bytes) from the start of the index buffer, to the first index to use.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get information about the primitive type and data order that describes the input data for the input-assembler stage.
-A reference to the type of primitive and ordering of the primitive data (see D3D11_PRIMITIVE_TOPOLOGY).
The
Bind one or more render targets atomically and the depth-stencil buffer to the output-merger stage.
-The maximum number of active render targets a device can have active at any given time is set by a #define in D3D11.h called D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT. It is invalid to try to set the same subresource to multiple render target slots. Any render targets not defined by this call are set to
If any subresources are also currently bound for reading in a different stage or writing (perhaps in a different part of the pipeline), those bind points will be set to
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
If the render-target views were created from an array resource type, then all of the render-target views must have the same array size. This restriction also applies to the depth-stencil view; its array size must match that of the render-target views being bound.
The pixel shader must be able to simultaneously render to at least eight separate render targets. All of these render targets must access the same type of resource: Buffer, Texture1D, Texture1DArray, Texture2D, Texture2DArray, Texture3D, or TextureCube. All render targets must have the same size in all dimensions (width and height, and depth for 3D or array size for *Array types). If render targets use multisample anti-aliasing, all bound render targets and depth buffer must be the same form of multisample resource (that is, the sample counts must be the same). Each render target can have a different data format. These render target formats are not required to have identical bit-per-element counts.
Any combination of the eight slots for render targets can have a render target set or not set.
The same resource view cannot be bound to multiple render target slots simultaneously. However, you can set multiple non-overlapping resource views of a single resource as simultaneous multiple render targets.
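The size and sample-count rules above can be sketched as a hypothetical validation pass (this is illustrative only and far simpler than the debug layer's actual checks; `RenderTargetInfo` and `validateRenderTargets` are invented names):

```cpp
#include <vector>

// Just the properties the rules above constrain: all simultaneously
// bound render targets must agree in size and MSAA sample count.
struct RenderTargetInfo {
    int width, height, sampleCount;
};

bool validateRenderTargets(const std::vector<RenderTargetInfo>& rts) {
    if (rts.size() > 8) return false;  // at most 8 simultaneous slots
    for (const auto& rt : rts) {
        // Every target must match the first in all dimensions and in
        // sample count; formats, by contrast, may differ per target.
        if (rt.width != rts[0].width || rt.height != rts[0].height ||
            rt.sampleCount != rts[0].sampleCount)
            return false;
    }
    return true;
}
```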
-Binds resources to the output-merger stage.
-Number of render-target views (ppRenderTargetViews) and depth-stencil view (ppDepthStencilView) to bind. If you set NumViews to D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL (0xffffffff), this method does not modify the currently bound render-target views (RTVs) and also does not modify depth-stencil view (DSV).
Pointer to an array of
Pointer to a
Index into a zero-based array to begin setting unordered-access views (ranges from 0 to
For the Direct3D 11.1 runtime, which is available starting with Windows Developer Preview, this value can range from 0 to D3D11_1_UAV_SLOT_COUNT - 1. D3D11_1_UAV_SLOT_COUNT is defined as 64.
For pixel shaders, UAVStartSlot should be equal to the number of render-target views being bound.
Number of unordered-access views (UAVs) in ppUnorderedAccessView. If you set NumUAVs to D3D11_KEEP_UNORDERED_ACCESS_VIEWS (0xffffffff), this method does not modify the currently bound unordered-access views.
For the Direct3D 11.1 runtime, which is available starting with Windows Developer Preview, this value can range from 0 to D3D11_1_UAV_SLOT_COUNT - UAVStartSlot.
Pointer to an array of
An array of append and consume buffer offsets. A value of -1 indicates to keep the current offset. Any other values set the hidden counter for that appendable and consumable UAV. pUAVInitialCounts is relevant only for UAVs that were created with either
For pixel shaders, the render targets and unordered-access views share the same resource slots when being written out. This means that UAVs must be given an offset so that they are placed in the slots after the render target views that are being bound.
Note: RTVs, DSV, and UAVs cannot be set independently; they all need to be set at the same time.
Two RTVs conflict if they share a subresource (and therefore share the same resource).
Two UAVs conflict if they share a subresource (and therefore share the same resource).
An RTV conflicts with a UAV if they share a subresource or share a bind point.
OMSetRenderTargetsAndUnorderedAccessViews operates properly in the following situations:
NumViews != D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL and NumUAVs != D3D11_KEEP_UNORDERED_ACCESS_VIEWS
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews performs the following tasks:
NumViews == D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL
In this situation, OMSetRenderTargetsAndUnorderedAccessViews binds only UAVs.
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews unbinds the following items:
OMSetRenderTargetsAndUnorderedAccessViews binds ppUnorderedAccessView.
OMSetRenderTargetsAndUnorderedAccessViews ignores ppDepthStencilView, and the current depth-stencil view remains bound.
NumUAVs == D3D11_KEEP_UNORDERED_ACCESS_VIEWS
In this situation, OMSetRenderTargetsAndUnorderedAccessViews binds only RTVs and DSV.
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews unbinds the following items:
OMSetRenderTargetsAndUnorderedAccessViews binds ppRenderTargetViews and ppDepthStencilView.
OMSetRenderTargetsAndUnorderedAccessViews ignores UAVStartSlot.
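The shared-slot rule for pixel shaders can be sketched as a hypothetical check (assuming the Direct3D 11.0 limit of 8 combined RTV/UAV slots; the 11.1 runtime raises the limit to 64; `uavRangeIsValid` is invented for illustration):

```cpp
#include <cstdint>

// RTVs occupy slots [0, numRTVs) and, for pixel shaders, UAVs must
// begin immediately after them so the two ranges never overlap.
bool uavRangeIsValid(uint32_t numRTVs, uint32_t uavStartSlot,
                     uint32_t numUAVs, uint32_t totalSlots = 8) {
    if (uavStartSlot != numRTVs) return false;   // UAVs start after RTVs
    return uavStartSlot + numUAVs <= totalSlots; // stay within slot count
}
```

So with two render targets bound, the first UAV must be placed at slot 2, and RTVs plus UAVs together cannot exceed the slot count.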
Bind one or more render targets atomically and the depth-stencil buffer to the output-merger stage.
-The maximum number of active render targets a device can have active at any given time is set by a #define in D3D11.h called
If any subresources are also currently bound for reading in a different stage or writing (perhaps in a different part of the pipeline), those bind points will be set to
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
If the render-target views were created from an array resource type, all of the render-target views must have the same array size. This restriction also applies to the depth-stencil view; its array size must match that of the render-target views being bound.
The pixel shader must be able to simultaneously render to at least eight separate render targets. All of these render targets must access the same type of resource: Buffer, Texture1D, Texture1DArray, Texture2D, Texture2DArray, Texture3D, or TextureCube. All render targets must have the same size in all dimensions (width and height, and depth for 3D or array size for *Array types). If render targets use multisample anti-aliasing, all bound render targets and depth buffer must be the same form of multisample resource (that is, the sample counts must be the same). Each render target can have a different data format. These render target formats are not required to have identical bit-per-element counts.
Any combination of the eight slots for render targets can have a render target set or not set.
The same resource view cannot be bound to multiple render target slots simultaneously. However, you can set multiple non-overlapping resource views of a single resource as simultaneous multiple render targets.
-Binds resources to the output-merger stage.
- Number of render targets to bind (ranges between 0 and
Pointer to an array of
Pointer to a
Index into a zero-based array to begin setting unordered-access views (ranges from 0 to
For the Direct3D 11.1 runtime, which is available starting with Windows 8, this value can range from 0 to D3D11_1_UAV_SLOT_COUNT - 1. D3D11_1_UAV_SLOT_COUNT is defined as 64.
For pixel shaders, UAVStartSlot should be equal to the number of render-target views being bound.
Number of unordered-access views (UAVs) in ppUnorderedAccessViews. If you set NumUAVs to D3D11_KEEP_UNORDERED_ACCESS_VIEWS (0xffffffff), this method does not modify the currently bound unordered-access views.
For the Direct3D 11.1 runtime, which is available starting with Windows 8, this value can range from 0 to D3D11_1_UAV_SLOT_COUNT - UAVStartSlot.
Pointer to an array of
An array of append and consume buffer offsets. A value of -1 indicates to keep the current offset. Any other values set the hidden counter for that appendable and consumable UAV. pUAVInitialCounts is relevant only for UAVs that were created with either
For pixel shaders, the render targets and unordered-access views share the same resource slots when being written out. This means that UAVs must be given an offset so that they are placed in the slots after the render target views that are being bound.
Note: RTVs, DSV, and UAVs cannot be set independently; they all need to be set at the same time. Two RTVs conflict if they share a subresource (and therefore share the same resource).
Two UAVs conflict if they share a subresource (and therefore share the same resource).
An RTV conflicts with a UAV if they share a subresource or share a bind point.
OMSetRenderTargetsAndUnorderedAccessViews operates properly in the following situations:
NumRTVs != D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL and NumUAVs != D3D11_KEEP_UNORDERED_ACCESS_VIEWS
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews performs the following tasks:
NumRTVs == D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL
In this situation, OMSetRenderTargetsAndUnorderedAccessViews binds only UAVs.
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews unbinds the following items:
OMSetRenderTargetsAndUnorderedAccessViews binds ppUnorderedAccessViews.
OMSetRenderTargetsAndUnorderedAccessViews ignores ppDepthStencilView, and the current depth-stencil view remains bound.
NumUAVs == D3D11_KEEP_UNORDERED_ACCESS_VIEWS
In this situation, OMSetRenderTargetsAndUnorderedAccessViews binds only RTVs and DSV.
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews unbinds the following items:
OMSetRenderTargetsAndUnorderedAccessViews binds ppRenderTargetViews and ppDepthStencilView.
OMSetRenderTargetsAndUnorderedAccessViews ignores UAVStartSlot.
Windows Phone 8: This API is supported.
-Set the blend state of the output-merger stage.
-Pointer to a blend-state interface (see
Array of blend factors, one for each RGBA component. The blend factors modulate values for the pixel shader, render target, or both. If you created the blend-state object with
32-bit sample coverage. The default value is 0xffffffff. See remarks.
Blend state is used by the output-merger stage to determine how to blend together two RGB pixel values and two alpha values. The two RGB pixel values and two alpha values are the RGB pixel value and alpha value that the pixel shader outputs and the RGB pixel value and alpha value already in the output render target. The blend option controls the data source that the blending stage uses to modulate values for the pixel shader, render target, or both. The blend operation controls how the blending stage mathematically combines these modulated values.
To create a blend-state interface, call
Passing in
State | Default Value
---|---
AlphaToCoverageEnable | FALSE
IndependentBlendEnable | FALSE
RenderTarget[0].BlendEnable | FALSE
RenderTarget[0].SrcBlend | D3D11_BLEND_ONE
RenderTarget[0].DestBlend | D3D11_BLEND_ZERO
RenderTarget[0].BlendOp | D3D11_BLEND_OP_ADD
RenderTarget[0].SrcBlendAlpha | D3D11_BLEND_ONE
RenderTarget[0].DestBlendAlpha | D3D11_BLEND_ZERO
RenderTarget[0].BlendOpAlpha | D3D11_BLEND_OP_ADD
RenderTarget[0].RenderTargetWriteMask | D3D11_COLOR_WRITE_ENABLE_ALL
A sample mask determines which samples get updated in all the active render targets. The mapping of bits in a sample mask to samples in a multisample render target is the responsibility of an individual application. A sample mask is always applied; it is independent of whether multisampling is enabled, and does not depend on whether an application uses multisample render targets.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
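As a worked example of the modulate-then-combine arithmetic described above, here is the common "source over" configuration (SrcBlend = SRC_ALPHA, DestBlend = INV_SRC_ALPHA, BlendOp = ADD) applied to a single channel. This is a sketch of the math the output merger performs, not GPU code:

```cpp
// result = srcValue * srcFactor  OP  dstValue * dstFactor
// Here the factors are the source alpha and its complement, and the
// operation is addition, i.e. classic alpha blending per channel.
float blendAdd(float src, float dst, float srcAlpha) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

A half-transparent white source over black yields 0.5, and a fully transparent source leaves the destination unchanged.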
-Sets the depth-stencil state of the output-merger stage.
-Pointer to a depth-stencil state interface (see
Reference value to perform against when doing a depth-stencil test. See remarks.
To create a depth-stencil state interface, call
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
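The reference value set here participates in the stencil test roughly as (StencilRef & ReadMask) compared against (StoredStencil & ReadMask). A minimal sketch for the "equal" comparison follows (illustrative only; the comparison function and masks come from the depth-stencil state object):

```cpp
#include <cstdint>

// Both the reference value and the stored stencil value are masked
// before the comparison; only the unmasked bits take part in the test.
bool stencilTestEqual(uint8_t stencilRef, uint8_t storedValue,
                      uint8_t readMask = 0xFF) {
    return (stencilRef & readMask) == (storedValue & readMask);
}
```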
-Get references to the resources bound to the output-merger stage.
-Number of render targets to retrieve.
Pointer to an array of
Pointer to a
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get references to the resources bound to the output-merger stage.
-The number of render-target views to retrieve.
Pointer to an array of
Pointer to a
Index into a zero-based array to begin retrieving unordered-access views (ranges from 0 to
Number of unordered-access views to return in ppUnorderedAccessViews. This number ranges from 0 to
Pointer to an array of
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
Windows Phone 8: This API is supported.
-Set the blend state of the output-merger stage.
-Pointer to a blend-state interface (see
Array of blend factors, one for each RGBA component. The blend factors modulate values for the pixel shader, render target, or both. If you created the blend-state object with
32-bit sample coverage. The default value is 0xffffffff. See remarks.
Blend state is used by the output-merger stage to determine how to blend together two RGB pixel values and two alpha values. The two RGB pixel values and two alpha values are the RGB pixel value and alpha value that the pixel shader outputs and the RGB pixel value and alpha value already in the output render target. The blend option controls the data source that the blending stage uses to modulate values for the pixel shader, render target, or both. The blend operation controls how the blending stage mathematically combines these modulated values.
To create a blend-state interface, call
Passing in
State | Default Value |
---|---|
AlphaToCoverageEnable | |
IndependentBlendEnable | |
RenderTarget[0].BlendEnable | |
RenderTarget[0].SrcBlend | |
RenderTarget[0].DestBlend | |
RenderTarget[0].BlendOp | |
RenderTarget[0].SrcBlendAlpha | |
RenderTarget[0].DestBlendAlpha | |
RenderTarget[0].BlendOpAlpha | |
RenderTarget[0].RenderTargetWriteMask |
A sample mask determines which samples get updated in all the active render targets. The mapping of bits in a sample mask to samples in a multisample render target is the responsibility of an individual application. A sample mask is always applied; it is independent of whether multisampling is enabled, and does not depend on whether an application uses multisample render targets.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
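The sample-mask rule above reduces to a single bit test. The helper below is illustrative (not an SDK function): sample i of a multisample render target is updated only when bit i of the 32-bit mask passed to the blend-state call is set, and the default mask 0xffffffff enables every sample.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper modeling the sample mask: a sample is written to the
// active render targets only if its corresponding bit in the mask is set.
bool SampleEnabled(uint32_t sampleMask, unsigned sampleIndex)
{
    return ((sampleMask >> sampleIndex) & 1u) != 0u;
}
```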
-Gets the depth-stencil state of the output-merger stage.
- Address of a reference to a depth-stencil state interface (see
Pointer to the stencil reference value used in the depth-stencil test.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
Windows Phone 8: This API is supported.
-The
All scissor rects must be set atomically as one operation. Any scissor rects not defined by the call are disabled.
The scissor rectangles will only be used if ScissorEnable is set to true in the rasterizer state (see
Which scissor rectangle to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader (see shader semantic syntax). If a geometry shader does not make use of the SV_ViewportArrayIndex semantic then Direct3D will use the first scissor rectangle in the array.
Each scissor rectangle in the array corresponds to a viewport in an array of viewports (see
All scissor rects must be set atomically as one operation. Any scissor rects not defined by the call are disabled.
The scissor rectangles will only be used if ScissorEnable is set to true in the rasterizer state (see
Which scissor rectangle to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader (see shader semantic syntax). If a geometry shader does not make use of the SV_ViewportArrayIndex semantic then Direct3D will use the first scissor rectangle in the array.
Each scissor rectangle in the array corresponds to a viewport in an array of viewports (see
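The scissor test itself is a simple containment check. The sketch below uses a minimal stand-in for D3D11_RECT (the real type lives in the Windows SDK headers); the pixel is kept only if it lies inside the scissor rectangle selected by SV_ViewportArrayIndex, following the left/top-inclusive, right/bottom-exclusive convention.

```cpp
#include <cassert>

// Illustrative stand-in for D3D11_RECT; not the SDK type.
struct Rect { long left, top, right, bottom; };

// A pixel at (x, y) survives the scissor test when it lies inside the
// selected rectangle. Left/top edges are inclusive, right/bottom exclusive.
bool PassesScissor(const Rect& r, long x, long y)
{
    return x >= r.left && x < r.right && y >= r.top && y < r.bottom;
}
```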
All viewports must be set atomically as one operation. Any viewports not defined by the call are disabled.
Which viewport to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader; if a geometry shader does not specify the semantic, Direct3D will use the first viewport in the array.
-All viewports must be set atomically as one operation. Any viewports not defined by the call are disabled.
Which viewport to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader; if a geometry shader does not specify the semantic, Direct3D will use the first viewport in the array.
-All viewports must be set atomically as one operation. Any viewports not defined by the call are disabled.
Which viewport to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader; if a geometry shader does not specify the semantic, Direct3D will use the first viewport in the array.
All viewports must be set atomically as one operation. Any viewports not defined by the call are disabled.
Which viewport to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader; if a geometry shader does not specify the semantic, Direct3D will use the first viewport in the array.
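The viewport-selection rule can be sketched as follows. This is an illustrative model, not SDK code: the index comes from the SV_ViewportArrayIndex value emitted by the geometry shader, and index 0 is used when the semantic is absent; clamping out-of-range indices to 0 is a defensive choice in this sketch, not documented runtime behavior.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical model of viewport selection in the rasterizer stage.
std::size_t SelectViewport(bool semanticWritten,
                           std::size_t svViewportArrayIndex,
                           std::size_t viewportCount)
{
    // No SV_ViewportArrayIndex output: the first viewport is used.
    if (!semanticWritten || svViewportArrayIndex >= viewportCount)
        return 0;
    return svViewportArrayIndex;
}
```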
Gets or sets a reference to the data contained in a subresource, and denies the GPU access to that subresource.
- If you call Map on a deferred context, you can only pass
For info about how to use Map, see How to: Use dynamic resources.
-Set the rasterizer state for the rasterizer stage of the pipeline.
-To create a rasterizer state interface, call
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Bind an array of viewports to the rasterizer stage of the pipeline.
-Number of viewports to bind.
An array of
All viewports must be set atomically as one operation. Any viewports not defined by the call are disabled.
Which viewport to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader; if a geometry shader does not specify the semantic, Direct3D will use the first viewport in the array.
Note: Even though you specify float values to the members of the
-Bind an array of scissor rectangles to the rasterizer stage.
-Number of scissor rectangles to bind.
An array of scissor rectangles (see D3D11_RECT).
All scissor rects must be set atomically as one operation. Any scissor rects not defined by the call are disabled.
The scissor rectangles will only be used if ScissorEnable is set to true in the rasterizer state (see
Which scissor rectangle to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader (see shader semantic syntax). If a geometry shader does not make use of the SV_ViewportArrayIndex semantic then Direct3D will use the first scissor rectangle in the array.
Each scissor rectangle in the array corresponds to a viewport in an array of viewports (see
Windows Phone 8: This API is supported.
-Gets a reference to the data contained in a subresource, and denies the GPU access to that subresource.
- If you call Map on a deferred context, you can only pass
For info about how to use Map, see How to: Use dynamic resources.
-Gets the array of viewports bound to the rasterizer stage.
- A reference to a variable that, on input, specifies the number of viewports (ranges from 0 to D3D11_VIEWPORT_AND_SCISSORRECT_OBJECT_COUNT_PER_PIPELINE) in the pViewports array; on output, the variable contains the actual number of viewports that are bound to the rasterizer stage. If pViewports is
An array of
Windows Phone 8: This API is supported.
-Get the array of scissor rectangles bound to the rasterizer stage.
-The number of scissor rectangles (ranges between 0 and D3D11_VIEWPORT_AND_SCISSORRECT_OBJECT_COUNT_PER_PIPELINE) bound; set pRects to
An array of scissor rectangles (see D3D11_RECT). If NumRects is greater than the number of scissor rects currently bound, then unused members of the array will contain 0.
The
Set the target output buffers for the stream-output stage of the pipeline.
-The number of buffers to bind to the device. A maximum of four output buffers can be set. If fewer than four are defined by the call, the remaining buffer slots are set to
The array of output buffers (see
Array of offsets to the output buffers from ppSOTargets, one offset for each buffer. The offset values must be in bytes.
An offset of -1 will cause the stream output buffer to be appended, continuing after the last location written to the buffer in a previous stream output pass.
Calling this method using a buffer that is currently bound for writing will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
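The offset-of-−1 (append) rule above can be sketched with a small helper. This is illustrative, not SDK API: in the C API the offset is a UINT, so −1 arrives as 0xffffffff and means "continue after the last byte written in a previous stream-output pass".

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper modeling the stream-output offset rule:
// 0xffffffff (-1 cast to UINT) appends after previous output; any other
// value positions the write cursor at that byte offset in the buffer.
uint32_t ResolveSOWriteOffset(uint32_t requestedOffset,
                              uint32_t bytesAlreadyWritten)
{
    const uint32_t kAppend = 0xffffffffu; // -1 as a UINT
    return requestedOffset == kAppend ? bytesAlreadyWritten : requestedOffset;
}
```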
-Set the target output buffers for the stream-output stage of the pipeline.
- The number of buffers to bind to the device. A maximum of four output buffers can be set. If fewer than four are defined by the call, the remaining buffer slots are set to
The array of output buffers (see
Array of offsets to the output buffers from ppSOTargets, one offset for each buffer. The offset values must be in bytes.
An offset of -1 will cause the stream output buffer to be appended, continuing after the last location written to the buffer in a previous stream output pass.
Calling this method using a buffer that is currently bound for writing will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
Windows Phone 8: This API is supported.
-Get the target output buffers for the stream-output stage of the pipeline.
-Number of buffers to get.
An array of output buffers (see
A maximum of four output buffers can be retrieved.
The offsets to the output buffers pointed to in the returned ppSOTargets array may be assumed to be -1 (append), as defined for use in
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
Windows Phone 8: This API is supported.
-The device context interface represents a device context; it is used to render commands.
Copies a region from a source resource to a destination resource.
-A reference to the destination resource.
Destination subresource index.
The x-coordinate of the upper-left corner of the destination region.
The y-coordinate of the upper-left corner of the destination region. For a 1D subresource, this must be zero.
The z-coordinate of the upper-left corner of the destination region. For a 1D or 2D subresource, this must be zero.
A reference to the source resource.
Source subresource index.
A reference to a 3D box that defines the region of the source subresource that CopySubresourceRegion1 can copy. If
An empty box results in a no-op. A box is empty if the top value is greater than or equal to the bottom value, or the left value is greater than or equal to the right value, or the front value is greater than or equal to the back value. When the box is empty, CopySubresourceRegion1 doesn't perform a copy operation.
A
If the display driver supports overlapping, the source and destination subresources can be identical, and the source and destination regions can overlap each other. For existing display drivers that don't support overlapping, the runtime drops calls with identical source and destination subresources, regardless of whether the regions overlap. To determine whether the display driver supports overlapping, check the CopyWithOverlap member of
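The empty-box rule described for pSrcBox reduces to three comparisons. The struct below is an illustrative stand-in for D3D11_BOX (the real type is in d3d11.h), with the same member order; when any extent is zero or negative, the copy is a no-op.

```cpp
#include <cassert>

// Illustrative stand-in for D3D11_BOX (same member order as the SDK type).
struct Box { unsigned left, top, front, right, bottom, back; };

// Mirrors the documented rule: a box is empty when top >= bottom,
// left >= right, or front >= back, and an empty box makes the copy a no-op.
bool IsEmptyBox(const Box& b)
{
    return b.top >= b.bottom || b.left >= b.right || b.front >= b.back;
}
```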
The CPU copies data from memory to a subresource created in non-mappable memory.
-A reference to the destination resource.
A zero-based index that identifies the destination subresource. See D3D11CalcSubresource for more details.
A reference to a box that defines the portion of the destination subresource to copy the resource data into. Coordinates are in bytes for buffers and in texels for textures. If
An empty box results in a no-op. A box is empty if the top value is greater than or equal to the bottom value, or the left value is greater than or equal to the right value, or the front value is greater than or equal to the back value. When the box is empty, UpdateSubresource1 doesn't perform an update operation.
A reference to the source data in memory.
The size of one row of the source data.
The size of one depth slice of source data.
A
If you call UpdateSubresource1 to update a constant buffer, pass any region, and the driver has not been implemented to Windows 8, the runtime drops the call (except feature level 9.1, 9.2, and 9.3 where the runtime emulates support). The runtime also drops the call if you update a constant buffer with a partial region whose extent is not aligned to 16-byte granularity (16 bytes being a full constant). When the runtime drops the call, the runtime doesn't call the corresponding device driver interface (DDI).
When you record a call to UpdateSubresource with an offset pDstBox in a software command list, the offset in pDstBox is incorrectly applied to pSrcData when you play back the command list. The new-for-Windows 8 UpdateSubresource1 fixes this issue. In a call to UpdateSubresource1, pDstBox does not affect pSrcData.
For info about various resource types and how UpdateSubresource1 might work with each resource type, see Introduction to a Resource in Direct3D 11.
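The SrcRowPitch and SrcDepthPitch parameters address the source memory in the usual row-major layout. The helper below is an illustrative sketch (names are hypothetical, not SDK API): the texel at (x, y, z) begins at z·depthPitch + y·rowPitch + x·bytesPerTexel from pSrcData.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative addressing model for the source data read by
// UpdateSubresource1: rows are rowPitch bytes apart, depth slices are
// depthPitch bytes apart, and texels within a row are tightly packed.
std::size_t SourceByteOffset(std::size_t x, std::size_t y, std::size_t z,
                             std::size_t bytesPerTexel,
                             std::size_t rowPitch, std::size_t depthPitch)
{
    return z * depthPitch + y * rowPitch + x * bytesPerTexel;
}
```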
Note: Applies only to feature level 9_x hardware. If you use UpdateSubresource1 or
-Discards a resource from the device context.
-A reference to the
DiscardResource informs the graphics processing unit (GPU) that the existing content in the resource that pResource points to is no longer needed.
-Discards a resource view from the device context.
-A reference to the
DiscardView informs the graphics processing unit (GPU) that the existing content in the resource view that pResourceView points to is no longer needed. The view can be an SRV, RTV, UAV, or DSV. DiscardView is a variation on the DiscardResource method. DiscardView allows you to discard a subset of a resource that is in a view (such as a single miplevel). More importantly, DiscardView provides a convenience because often views are what are being bound and unbound at the pipeline. Some pipeline bindings do not have views, such as stream output. In that situation, DiscardResource can do the job for any resource.
-Sets the constant buffers that the vertex shader pipeline stage uses.
-Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to VSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to VSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
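The offset arithmetic and "window" rule above can be sketched directly. These helpers are illustrative, not SDK API: each shader constant is 16 bytes (4 × 32-bit components), so an offset of 16 constants is 256 bytes, and the range the shader sees is the intersection of the buffer allocation with [pFirstConstant, pFirstConstant + pNumConstants).

```cpp
#include <cassert>
#include <cstdint>

// Byte offset of a constant-buffer window: 16 bytes per shader constant.
uint32_t ConstantOffsetInBytes(uint32_t firstConstant)
{
    return firstConstant * 16u;
}

// Hypothetical model of the window rule for *SSetConstantBuffers1: the
// shader-visible range is the intersection of the buffer's allocation
// (measured in constants) and [firstConstant, firstConstant + numConstants).
uint32_t VisibleConstants(uint32_t bufferSizeInConstants,
                          uint32_t firstConstant, uint32_t numConstants)
{
    if (firstConstant >= bufferSizeInConstants)
        return 0; // window starts past the end of the allocation
    uint32_t end = firstConstant + numConstants;
    if (end > bufferSizeInConstants)
        end = bufferSizeInConstants;
    return end - firstConstant;
}
```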
Sets the constant buffers that the vertex shader pipeline stage uses.
-Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to VSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to VSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the vertex shader pipeline stage uses.
-Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to VSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to VSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the hull-shader stage of the pipeline uses.
-The runtime drops the call to HSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to HSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If the pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the hull-shader stage of the pipeline uses.
-The runtime drops the call to HSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to HSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If the pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the hull-shader stage of the pipeline uses.
-The runtime drops the call to HSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to HSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If the pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the domain-shader stage uses.
-Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to DSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to DSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the domain-shader stage uses.
-Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to DSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to DSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the domain-shader stage uses.
-Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to DSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to DSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the geometry shader pipeline stage uses.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to GSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to GSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the geometry shader pipeline stage uses.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to GSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to GSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the geometry shader pipeline stage uses.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to GSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to GSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the pixel shader pipeline stage uses, and enables the shader to access other parts of the buffer.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
To enable the shader to access other parts of the buffer, call PSSetConstantBuffers1 instead of PSSetConstantBuffers. PSSetConstantBuffers1 has additional parameters pFirstConstant and pNumConstants.
The runtime drops the call to PSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders. The maximum constant buffer size that is supported by shaders holds 4096 constants, where each constant has four 32-bit components.
The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the following window (range):
[value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]
That is, the window is the range from (value in an element of pFirstConstant) to (value in an element of pFirstConstant + value in an element of pNumConstants).
The runtime also drops the call to PSSetConstantBuffers1 on existing drivers that do not support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the pixel shader pipeline stage uses, and enables the shader to access other parts of the buffer.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
To enable the shader to access other parts of the buffer, call PSSetConstantBuffers1 instead of PSSetConstantBuffers. PSSetConstantBuffers1 has additional parameters pFirstConstant and pNumConstants.
The runtime drops the call to PSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders. The maximum constant buffer size that is supported by shaders holds 4096 constants, where each constant has four 32-bit components.
The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the following window (range):
[value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]
That is, the window is the range from (value in an element of pFirstConstant) to (value in an element of pFirstConstant + value in an element of pNumConstants).
The runtime also drops the call to PSSetConstantBuffers1 on existing drivers that do not support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the pixel shader pipeline stage uses, and enables the shader to access other parts of the buffer.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
To enable the shader to access other parts of the buffer, call PSSetConstantBuffers1 instead of PSSetConstantBuffers. PSSetConstantBuffers1 has additional parameters pFirstConstant and pNumConstants.
The runtime drops the call to PSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders. The maximum constant buffer size that is supported by shaders holds 4096 constants, where each constant has four 32-bit components.
The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the following window (range):
[value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]
That is, the window is the range from (value in an element of pFirstConstant) to (value in an element of pFirstConstant + value in an element of pNumConstants).
The runtime also drops the call to PSSetConstantBuffers1 on existing drivers that do not support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the compute-shader stage uses.
Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to CSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to CSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the compute-shader stage uses.
Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to CSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to CSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the compute-shader stage uses.
Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to CSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to CSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Gets the constant buffers that the vertex shader pipeline stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the hull-shader stage uses.
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the domain-shader stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the geometry shader pipeline stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the pixel shader pipeline stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the compute-shader stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Activates the given context state object and changes the current device behavior to Direct3D 11, Direct3D 10.1, or Direct3D 10.
A reference to the
A reference to a variable that receives a reference to the
SwapDeviceContextState changes device behavior. This device behavior depends on the emulated interface that you passed to the EmulatedInterface parameter of the
SwapDeviceContextState is not supported on a deferred context.
SwapDeviceContextState disables the incompatible device interfaces ID3D10Device, ID3D10Device1, __uuidof(
or __uuidof(
turns off most of the Direct3D 10 device interfaces. A context state object that is created with __uuidof(ID3D10Device1)
or __uuidof(ID3D10Device)
turns off most of the
SwapDeviceContextState activates the context state object specified by pState. This means that the device behaviors that are associated with the context state object's feature level and compatible interface are activated on the Direct3D device until the next call to SwapDeviceContextState. In addition, any state that was saved when this context state object was last active is now reactivated, so that the previous state is replaced.
SwapDeviceContextState sets ppPreviousState to the most recently activated context state object. The object allows the caller to save and then later restore the previous device state. This behavior is useful in a plug-in architecture such as Direct2D that shares a Direct3D device with its plug-ins. A Direct2D interface can use context state objects to save and restore the application's state.
If the caller did not previously call the
The feature level that is specified by the application, and that is chosen by the context state object from the acceptable list that the application supplies to
The feature level of the context state object controls the functionality available from the immediate context. However, to maintain the free-threaded contract of the Direct3D 11 device methods (the resource-creation methods in particular), the upper-bound feature level of all created context state objects controls the set of resources that the device creates.
Because the context state object interface is published by the immediate context, the interface requires the same threading model as the immediate context. Specifically, SwapDeviceContextState is single-threaded with respect to the other immediate context methods and with respect to the equivalent methods of ID3D10Device.
Crucially, because only one of the Direct3D 10 or Direct3D 11 ref-count behaviors can be available at a time, one of the Direct3D 10 and Direct3D 11 interfaces must break its ref-count contract. To avoid this situation, the activation of a context state object turns off the incompatible version interface. Also, if you call a method of an incompatible version interface, the call silently fails if the method has return type void, returns an
When you switch from Direct3D 11 mode to either Direct3D 10 mode or Direct3D 10.1 mode, the binding behavior of the device changes. Specifically, the final release of a resource induces an unbind in Direct3D 10 mode or Direct3D 10.1 mode. During final release an application releases all of the resource's references, including indirect references such as the linkage from view to resource, and the linkage from context state object to any of the context state object's bound resources. Any bound resource to which the application has no reference is unbound and destroyed, in order to maintain the Direct3D 10 behavior.
SwapDeviceContextState does not affect any state that
Command lists that are generated by deferred contexts do not hold a reference to context state objects and are not affected by future updates to context state objects.
No asynchronous objects are affected by SwapDeviceContextState. For example, if a query is active before a call to SwapDeviceContextState, it is still active after the call.
-Sets all the elements in a resource view to one value.
-A reference to the
A 4-component array that represents the color to use to clear the resource view.
An array of D3D11_RECT structures for the rectangles in the resource view to clear. If
Number of rectangles in the array that the pRect parameter specifies.
ClearView works only on render-target views (RTVs), depth/stencil views (DSVs) on depth-only resources (resources with no stencil component), unordered-access views (UAVs), or any video view of a Texture2D surface. The runtime drops invalid calls. Empty rectangles in the pRect array are a no-op. A rectangle is empty if the top value equals the bottom value or the left value equals the right value.
ClearView doesn't support 3D textures.
ClearView applies the same color value to all array slices in a view; all rectangles in the pRect array correspond to each array slice. The pRect array of rectangles is a set of areas to clear on a single surface. If the view is an array, ClearView clears all the rectangles on each array slice individually.
When you apply rectangles to buffers, set the top value to 0 and the bottom value to 1, and set the left and right values to describe the extent within the buffer. When the top value equals the bottom value or the left value equals the right value, the rectangle is empty and the clear is a no-op.
The driver converts and clamps color values to the destination format as appropriate per Direct3D conversion rules. For example, if the format of the view is
If the format is integer, such as
Here are the color mappings:
For video views with YUV or YCbCr formats, ClearView doesn't convert color values. In situations where the format name doesn't indicate _UNORM, _UINT, and so on, ClearView assumes _UINT. Therefore, 235.0f maps to 235 (the value rounds toward zero; out-of-range/INF values clamp to the target range; NaN maps to 0).
-Discards the specified elements in a resource view from the device context.
- A reference to the
An array of D3D11_RECT structures for the rectangles in the resource view to discard. If
Number of rectangles in the array that the pRects parameter specifies.
DiscardView1 informs the graphics processing unit (GPU) that the existing content in the specified elements in the resource view that pResourceView points to is no longer needed. The view can be an SRV, RTV, UAV, or DSV. DiscardView1 is a variation on the DiscardResource method. DiscardView1 allows you to discard elements of a subset of a resource that is in a view (such as elements of a single miplevel). More importantly, DiscardView1 provides a convenience because views are often what are bound and unbound at the pipeline. Some pipeline bindings, such as stream output, do not have views. In that situation, DiscardResource can do the job for any resource.
-The device context interface represents a device context; it is used to render commands.
Allows apps to determine when either a capture or profiling request is enabled.
-Returns TRUE if the capture tool is present and capturing or the app is being profiled such that SetMarkerInt or BeginEventInt will be logged to ETW. Otherwise, it returns
If apps detect that capture is being performed, they can prevent the Direct3D debugging tools, such as Microsoft Visual Studio 2013, from capturing them. The purpose of the
Updates mappings of tile locations in tiled resources to memory locations in a tile pool.
-A reference to the tiled resource.
The number of tiled resource regions.
An array of
An array of
A reference to the tile pool.
The number of tile-pool ranges.
An array of
An array of offsets into the tile pool. These are 0-based tile offsets, counting in tiles (not bytes).
An array of tiles.
An array of values that specify the number of tiles in each tile-pool range. The NumRanges parameter specifies the number of values in the array.
A combination of D3D11_TILE_MAPPING_FLAGS values that are combined by using a bitwise OR operation.
Returns
The debug layer will emit an error.
If an out-of-memory condition occurs when UpdateTileMappings is called in a command list and the command list is then executed, the device will be removed. Apps can avoid this situation by making only update calls that change existing mappings from tiled resources within command lists (so drivers will not have to allocate page-table memory, only change the mapping).
In a single call to UpdateTileMappings, you can map one or more ranges of resource tiles to one or more ranges of tile-pool tiles.
You can organize the parameters of UpdateTileMappings in these ways to perform an update:
If pTiledResourceRegionStartCoordinates isn't
The updates are applied from first region to last; so, if regions overlap in a single call, the updates later in the list overwrite the areas that overlap with previous updates.
NumRanges specifies the number of tile ranges, where the total tiles identified across all ranges must match the total number of tiles in the tile regions from the previously described tiled resource. Mappings are defined by iterating through the tiles in the tile regions in sequential order (x, then y, then z order for box regions) while walking through the set of tile ranges in sequential order. The breakdown of tile regions doesn't have to line up with the breakdown of tile ranges, but the total number of tiles on both sides must be equal so that each tiled resource tile specified has a mapping specified.
pRangeFlags, pTilePoolStartOffsets, and pRangeTileCounts are all arrays, of size NumRanges, that describe the tile ranges. If pRangeFlags is
If tile mappings have changed on a tiled resource that the app will render via RenderTargetView or DepthStencilView, the app must clear, by using the fixed-function Clear APIs, the tiles that have changed within the area being rendered (mapped or not). If an app doesn't clear in these situations, the app receives undefined values when it reads from the tiled resource.
Note: In Direct3D 11.2, hardware can now support ClearView on depth-only formats. For more info, see ClearView. If an app needs to preserve existing memory contents of areas in a tiled resource where mappings have changed, the app can first save the contents where tile mappings have changed (by copying them to a temporary surface, for example using CopyTiles), issue the required Clear, and then copy the contents back.
Suppose a tile is mapped into multiple tiled resources at the same time and tile contents are manipulated by any means (render, copy, and so on) via one of the tiled resources. Then, if the same tile is to be rendered via any other tiled resource, the tile must be cleared first as previously described.
For more info about tiled resources, see Tiled resources.
Here are some examples of common UpdateTileMappings cases:
-Copies mappings from a source tiled resource to a destination tiled resource.
-A reference to the destination tiled resource.
A reference to a
A reference to the source tiled resource.
A reference to a
A reference to a
A combination of D3D11_TILE_MAPPING_FLAGS values that are combined by using a bitwise OR operation. The only valid value is
Returns
The destination and source regions must each fit entirely within their resource, or behavior is undefined (the debug layer will emit an error).
If an out-of-memory condition occurs when CopyTileMappings is called in a command list and the command list is then executed, the device will be removed. Applications can avoid this situation by making only update calls that change existing mappings from tiled resources within command lists (so drivers will not have to allocate page-table memory, only change the mapping).
CopyTileMappings helps with tasks such as shifting mappings around within and across tiled resources, for example, scrolling tiles. The source and destination regions can overlap; the result of the copy in this situation is as if the source was saved to a temp location and then from there written to the destination.
For more info about tiled resources, see Tiled resources.
-Copies tiles from buffer to tiled resource or vice versa.
-A reference to a tiled resource.
A reference to a
A reference to a
A reference to an
The offset in bytes into the buffer at pBuffer to start the operation.
A combination of
CopyTiles drops write operations to unmapped areas and handles read operations from unmapped areas (except on Tier_1 tiled resources, where reading and writing unmapped areas is invalid).
If a copy operation involves writing to the same memory location multiple times because multiple locations in the destination resource are mapped to the same tile memory, the resulting write operations to multi-mapped tiles are non-deterministic and non-repeatable; that is, accesses to the tile memory happen in whatever order the hardware happens to execute the copy operation.
The tiles involved in the copy operation can't include tiles that contain packed mipmaps or results of the copy operation are undefined. To transfer data to and from mipmaps that the hardware packs into one tile, you must use the standard (that is, non-tile specific) copy and update APIs (like
The memory layout of the tiles in the non-tiled buffer resource side of the copy operation is linear in memory within 64 KB tiles, which the hardware and driver swizzle and deswizzle per tile as appropriate when they transfer to and from a tiled resource. For multisample antialiasing (MSAA) surfaces, the hardware and driver traverse each pixel's samples in sample-index order before they move to the next pixel. For tiles that are partially filled on the right side (for a surface that has a width not a multiple of tile width in pixels), the pitch and stride to move down a row is the full size in bytes of the number of pixels that would fit across the tile if the tile was full. So, there can be a gap between each row of pixels in memory. Mipmaps that are smaller than a tile are not packed together in the linear layout, which might seem to be a waste of memory space, but as mentioned you can't use CopyTiles or
For more info about tiled resources, see Tiled resources.
-Updates tiles by copying from app memory to the tiled resource.
-A reference to a tiled resource to update.
A reference to a
A reference to a
A reference to memory that contains the source tile data that UpdateTiles uses to update the tiled resource.
A combination of
UpdateTiles drops write operations to unmapped areas (except on Tier_1 tiled resources, where writing to unmapped areas is invalid).
If a copy operation involves writing to the same memory location multiple times because multiple locations in the destination resource are mapped to the same tile memory, the resulting write operations to multi-mapped tiles are non-deterministic and non-repeatable; that is, accesses to the tile memory happen in whatever order the hardware happens to execute the copy operation.
The tiles involved in the copy operation can't include tiles that contain packed mipmaps or results of the copy operation are undefined. To transfer data to and from mipmaps that the hardware packs into one tile, you must use the standard (that is, non-tile specific) copy and update APIs (like
The memory layout of the data on the source side of the copy operation is linear in memory within 64 KB tiles, which the hardware and driver swizzle and deswizzle per tile as appropriate when they transfer to and from a tiled resource. For multisample antialiasing (MSAA) surfaces, the hardware and driver traverse each pixel's samples in sample-index order before they move to the next pixel. For tiles that are partially filled on the right side (for a surface that has a width not a multiple of tile width in pixels), the pitch and stride to move down a row is the full size in bytes of the number of pixels that would fit across the tile if the tile was full. So, there can be a gap between each row of pixels in memory. Mipmaps that are smaller than a tile are not packed together in the linear layout, which might seem to be a waste of memory space, but as mentioned you can't use
For more info about tiled resources, see Tiled resources.
-Resizes a tile pool.
-A reference to an
The new size in bytes of the tile pool. The size must be a multiple of 64 KB or 0.
Returns
For E_INVALIDARG or E_OUTOFMEMORY, the existing tile pool remains unchanged, which includes existing mappings.
ResizeTilePool increases or decreases the size of the tile pool depending on whether the app needs more or less working set for the tiled resources that are mapped into it. An app can allocate additional tile pools for new tiled resources, but if any single tiled resource needs more space than initially available in its tile pool, the app can increase the size of the resource's tile pool. A tiled resource can't have mappings into multiple tile pools simultaneously.
When you increase the size of a tile pool, additional tiles are added to the end of the tile pool via one or more new allocations by the driver; your app can't detect the breakdown into the new allocations. Existing memory in the tile pool is left untouched, and existing tiled resource mappings into that memory remain intact.
When you decrease the size of a tile pool, tiles are removed from the end (this is allowed even below the initial allocation size, down to 0). This means that new mappings can't be made past the new size. But, existing mappings past the end of the new size remain intact and usable. The memory is kept active as long as mappings to any part of the allocations being used for the tile-pool memory remain. If, after decreasing, some memory has been kept active because tile mappings are pointing to it and the tile pool is increased again (by any amount), the existing memory is reused first before any additional allocations occur to service the size of the increase.
To save memory, an app must not only decrease the size of a tile pool but also remove and remap existing mappings past the end of the new, smaller tile-pool size.
The act of decreasing (and removing mappings) doesn't necessarily produce immediate memory savings. Freeing of memory depends on how granular the driver's underlying allocations for the tile pool are. When a decrease in the size of a tile pool happens to be enough to make a driver allocation unused, the driver can free the allocation. If a tile pool was increased and if you then decrease to previous sizes (and remove and remap tile mappings correspondingly), you will most likely yield memory savings. But, this scenario isn't guaranteed in the case that the sizes don't exactly align with the underlying allocation sizes chosen by the driver.
For more info about tiled resources, see Tiled resources.
-Specifies a data access ordering constraint between multiple tiled resources. For more info about this constraint, see Remarks.
-A reference to an
A reference to an
Apps can use tiled resources to reuse tiles in different resources. But, a device and driver might not be able to determine whether some memory in a tile pool that was just rendered to is now being used for reading.
For example, an app can render to some tiles in a tile pool with one tiled resource but then read from the same tiles by using a different tiled resource. These tiled-resource operations are different from using one resource and then just switching from writing with
When an app transitions from accessing (reading or writing) some location in a tile pool with one resource to accessing the same memory (read or write) via another tiled resource (with mappings to the same memory), the app must call TiledResourceBarrier after the first use of the resource and before the second. The parameters are the pTiledResourceOrViewAccessBeforeBarrier for accesses before the barrier (via rendering, copying), and the pTiledResourceOrViewAccessAfterBarrier for accesses after the barrier by using the same tile pool memory. If the resources are identical, the app doesn't need to call TiledResourceBarrier because this kind of hazard is already tracked and handled.
The barrier call informs the driver that operations issued to the resource before the call must complete before any accesses that occur after the call via a different tiled resource that shares the same memory.
Either or both of the parameters (before or after the barrier) can be
An app can pass a view reference, a resource, or
For more info about tiled resources, see Tiled resources.
-Allows apps to determine when either a capture or profiling request is enabled.
-Returns TRUE if capture or profiling is enabled and
Returns TRUE if the capture tool is present and capturing or the app is being profiled such that SetMarkerInt or BeginEventInt will be logged to ETW. Otherwise, it returns
If apps detect that capture is being performed, they can prevent the Direct3D debugging tools, such as Microsoft Visual Studio 2013, from capturing them. The purpose of the
Allows applications to annotate graphics commands.
-An optional string that will be logged to ETW when ETW logging is active. If '#d' appears in the string, it will be replaced by the value of the Data parameter, similar to the way printf works.
A signed data value that will be logged to ETW when ETW logging is active.
SetMarkerInt allows applications to annotate graphics commands, in order to provide more context to what the GPU is executing. When ETW logging (or a supported tool) is enabled, an additional marker is correlated between the CPU and GPU timelines. The pLabel and Data values are logged to ETW. When the appropriate ETW logging is not enabled, this method does nothing.
-Allows applications to annotate the beginning of a range of graphics commands.
-An optional string that will be logged to ETW when ETW logging is active. If '#d' appears in the string, it will be replaced by the value of the Data parameter, similar to the way printf works.
A signed data value that will be logged to ETW when ETW logging is active.
BeginEventInt allows applications to annotate the beginning of a range of graphics commands, in order to provide more context to what the GPU is executing. When ETW logging (or a supported tool) is enabled, an additional marker is correlated between the CPU and GPU timelines. The pLabel and Data value are logged to ETW. When the appropriate ETW logging is not enabled, this method does nothing.
-Allows applications to annotate the end of a range of graphics commands.
-EndEvent allows applications to annotate the end of a range of graphics commands, in order to provide more context to what the GPU is executing. When the appropriate ETW logging is not enabled, this method does nothing. When ETW logging is enabled, an additional marker is correlated between the CPU and GPU timelines.
- The device context interface represents a device context; it is used to render commands.
Gets or sets whether hardware protection is enabled.
-Sends queued-up commands in the command buffer to the graphics processing unit (GPU), with a specified context type and an optional event handle to create an event query.
- A
An optional event handle. When specified, this method creates an event query.
Flush1 operates asynchronously; therefore, it can return either before or after the GPU finishes executing the queued graphics commands, which will eventually complete. To create an event query, you can call
Flush1 has parameters. For more information, see
Sets the hardware protection state.
-Specifies whether to enable hardware protection.
Gets whether hardware protection is enabled.
- After this method returns, points to a
A debug interface controls debug settings, validates pipeline state and can only be used if the debug layer is turned on.
- This interface is obtained by querying it from the
For more information about the debug layer, see Debug Layer.
Windows Phone 8: This API is supported.
-Get or sets the number of milliseconds to sleep after
Value is set with
Get or sets the swap chain that the runtime will use for automatically calling
The swap chain retrieved by this method will only be used if
Set a bit field of flags that will turn debug features on and off.
-A combination of feature-mask flags that are combined by using a bitwise OR operation. If a flag is present, that feature will be set to on, otherwise the feature will be set to off. For descriptions of the feature-mask flags, see Remarks.
This method returns one of the Direct3D 11 Return Codes.
Setting one of the following feature-mask flags will cause a rendering-operation method (listed below) to do some extra task when called.
Application will wait for the GPU to finish processing the rendering operation before continuing.
Runtime will additionally call
Runtime will call
These feature-mask flags apply to the following rendering-operation methods:
By setting one of the following feature-mask flags, you can control the behavior of the
When you call
When you call
The behavior of the
The following flag is supported by the Direct3D 11.1 runtime.
Disables the following default debugging behavior.
When the debug layer is enabled, it performs certain actions to reveal application problems. By setting the
The following flag is supported by the Direct3D 11.2 runtime.
Disables the following default debugging behavior.
By default (that is, without
If
Get a bitfield of flags that indicates which debug features are on or off.
-Mask of feature-mask flags bitwise ORed together. If a flag is present, then that feature will be set to on, otherwise the feature will be set to off. See
Set the number of milliseconds to sleep after
This method returns one of the following Direct3D 11 Return Codes.
The application will only sleep if
Get the number of milliseconds to sleep after
Number of milliseconds to sleep after Present is called.
Value is set with
Sets a swap chain that the runtime will use for automatically calling
This method returns one of the following Direct3D 11 Return Codes.
The swap chain set by this method will only be used if
Get the swap chain that the runtime will use for automatically calling
This method returns one of the following Direct3D 11 Return Codes.
The swap chain retrieved by this method will only be used if
Check to see if the draw pipeline state is valid.
-A reference to the
This method returns one of the following Direct3D 11 Return Codes.
Use this method prior to calling a draw method (for example,
Report information about a device object's lifetime.
-A value from the
This method returns one of the following Direct3D 11 Return Codes.
ReportLiveDeviceObjects uses the value in Flags to determine the amount of information to report about a device object's lifetime.
-Verifies whether the dispatch pipeline state is valid.
-A reference to the
This method returns one of the return codes described in the topic Direct3D 11 Return Codes.
Use this method before you call a dispatch method (for example,
A domain-shader interface manages an executable program (a domain shader) that controls the domain-shader stage.
-The domain-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a domain-shader interface, call
This interface is defined in D3D11.h.
-The device context interface represents a device context; it is used to render commands.
Optional flags that control the behavior of
Specifies the type of Microsoft Direct3D authenticated channel.
-Direct3D 11 channel. This channel provides communication with the Direct3D runtime.
Software driver channel. This channel provides communication with a driver that implements content protection mechanisms in software.
Hardware driver channel. This channel provides communication with a driver that implements content protection mechanisms in the GPU hardware.
Specifies the type of process that is identified in the
Identifies how to bind a resource to the pipeline.
-In general, binding flags can be combined using a logical OR (except the constant-buffer flag); however, you should use a single flag to allow the device to optimize the resource usage.
This enumeration is used by a:
A shader-resource buffer is NOT a constant buffer; rather, it is a texture or buffer resource that is bound to a shader and contains texture or buffer data (it is not limited to a single element type in the buffer). A shader-resource buffer is created with the
Bind a buffer as a vertex buffer to the input-assembler stage.
Bind a buffer as an index buffer to the input-assembler stage.
Bind a buffer as a constant buffer to a shader stage; this flag may NOT be combined with any other bind flag.
Bind a buffer or texture to a shader stage; this flag cannot be used with the
Bind an output buffer for the stream-output stage.
Bind a texture as a render target for the output-merger stage.
Bind a texture as a depth-stencil target for the output-merger stage.
Bind an unordered access resource.
Set this flag to indicate that a 2D texture is used to receive output from the decoder API. The common way to create resources for a decoder output is by calling the
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that a 2D texture is used to receive input from the video encoder API. The common way to create resources for a video encoder is by calling the
Direct3D 11: This value is not supported until Direct3D 11.1.
RGB or alpha blending operation.
-The runtime implements RGB blending and alpha blending separately. Therefore, blend state requires separate blend operations for RGB data and alpha data. These blend operations are specified in a blend description. The two sources (source 1 and source 2) are shown in the blending block diagram.
Blend state is used by the output-merger stage to determine how to blend together two RGB pixel values and two alpha values. The two RGB pixel values and two alpha values are the RGB pixel value and alpha value that the pixel shader outputs and the RGB pixel value and alpha value already in the output render target. The blend option controls the data source that the blending stage uses to modulate values for the pixel shader, render target, or both. The blend operation controls how the blending stage mathematically combines these modulated values.
-Add source 1 and source 2.
Subtract source 1 from source 2.
Subtract source 2 from source 1.
Find the minimum of source 1 and source 2.
Find the maximum of source 1 and source 2.
Blend factors, which modulate values for the pixel shader and render target.
-Blend operations are specified in a blend description.
-The blend factor is (0, 0, 0, 0). No pre-blend operation.
The blend factor is (1, 1, 1, 1). No pre-blend operation.
The blend factor is (Rs, Gs, Bs, As); that is, color data (RGB) from a pixel shader. No pre-blend operation.
The blend factor is (1 - Rs, 1 - Gs, 1 - Bs, 1 - As); that is, color data (RGB) from a pixel shader. The pre-blend operation inverts the data, generating 1 - RGB.
The blend factor is (As, As, As, As); that is, alpha data (A) from a pixel shader. No pre-blend operation.
The blend factor is (1 - As, 1 - As, 1 - As, 1 - As); that is, alpha data (A) from a pixel shader. The pre-blend operation inverts the data, generating 1 - A.
The blend factor is (Ad, Ad, Ad, Ad); that is, alpha data from a render target. No pre-blend operation.
The blend factor is (1 - Ad, 1 - Ad, 1 - Ad, 1 - Ad); that is, alpha data from a render target. The pre-blend operation inverts the data, generating 1 - A.
The blend factor is (Rd, Gd, Bd, Ad), that is color data from a render target. No pre-blend operation.
The blend factor is (1 - Rd, 1 - Gd, 1 - Bd, 1 - Ad), that is color data from a render target. The pre-blend operation inverts the data, generating 1 - RGB.
The blend factor is (f, f, f, 1), where f = min(As, 1 - Ad). The pre-blend operation clamps the data to 1 or less.
The blend factor is the blend factor set with
The blend factor is the blend factor set with
The blend factor is data sources both as color data output by a pixel shader. There is no pre-blend operation. This blend factor supports dual-source color blending.
The blend factor is data sources both as color data output by a pixel shader. The pre-blend operation inverts the data, generating 1 - RGB. This blend factor supports dual-source color blending.
The blend factor is data sources as alpha data output by a pixel shader. There is no pre-blend operation. This blend factor supports dual-source color blending.
The blend factor is data sources as alpha data output by a pixel shader. The pre-blend operation inverts the data, generating 1 - A. This blend factor supports dual-source color blending.
Specifies the type of I/O bus that is used by the graphics adapter.
-Indicates a type of bus other than the types listed here.
PCI bus.
PCI-X bus.
PCI Express bus.
Accelerated Graphics Port (AGP) bus.
The implementation for the graphics adapter is in a motherboard chipset's north bridge. This flag implies that data never goes over an expansion bus (such as PCI or AGP) when it is transferred from main memory to the graphics adapter.
Indicates that the graphics adapter is connected to a motherboard chipset's north bridge by tracks on the motherboard, and all of the graphics adapter's chips are soldered to the motherboard. This flag implies that data never goes over an expansion bus (such as PCI or AGP) when it is transferred from main memory to the graphics adapter.
The graphics adapter is connected to a motherboard chipset's north bridge by tracks on the motherboard, and all of the graphics adapter's chips are connected through sockets to the motherboard.
The graphics adapter is connected to the motherboard through a daughterboard connector.
The graphics adapter is connected to the motherboard through a daughterboard connector, and the graphics adapter is inside an enclosure that is not user accessible.
One of the D3D11_BUS_IMPL_MODIFIER_Xxx flags is set.
Identifies how to check multisample quality levels.
-Indicates to check the multisample quality levels of a tiled resource.
Identify which components of each pixel of a render target are writable during blending.
-These flags can be combined with a bitwise OR.
-Allow data to be stored in the red component.
Allow data to be stored in the green component.
Allow data to be stored in the blue component.
Allow data to be stored in the alpha component.
Allow data to be stored in all components.
Comparison options.
-A comparison option determines how the runtime compares source (new) data against destination (existing) data before storing the new data. The comparison option is declared in a description before an object is created. The API allows you to set a comparison option for a depth-stencil buffer (see
Never pass the comparison.
If the source data is less than the destination data, the comparison passes.
If the source data is equal to the destination data, the comparison passes.
If the source data is less than or equal to the destination data, the comparison passes.
If the source data is greater than the destination data, the comparison passes.
If the source data is not equal to the destination data, the comparison passes.
If the source data is greater than or equal to the destination data, the comparison passes.
Always pass the comparison.
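As a sketch of these options in use, a comparison function is commonly supplied as the depth test of a depth-stencil state description. The following assumes an existing ID3D11Device* named device; names other than the D3D11 API identifiers are illustrative.

```cpp
// Passes a pixel only when its depth is less than the stored depth
// (the usual "closer wins" test).
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable    = TRUE;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc      = D3D11_COMPARISON_LESS;   // source < destination passes

ID3D11DepthStencilState* dsState = nullptr;
HRESULT hr = device->CreateDepthStencilState(&dsDesc, &dsState);
```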
Unordered resource support options for a compute shader resource (see
Identifies whether conservative rasterization is on or off.
Conservative rasterization is off.
Conservative rasterization is on.
Specifies if the hardware and driver support conservative rasterization and at what tier level.
Conservative rasterization isn't supported.
Tier_1 conservative rasterization is supported.
Tier_2 conservative rasterization is supported.
Tier_3 conservative rasterization is supported.
Contains flags that describe content-protection capabilities.
The content protection is implemented in software by the driver.
The content protection is implemented in hardware by the GPU.
Content protection is always applied to a protected surface, regardless of whether the application explicitly enables protection.
The driver can use partially encrypted buffers. If this capability is not present, the entire buffer must be either encrypted or clear.
The driver can encrypt data using a separate content key that is encrypted using the session key.
The driver can refresh the session key without renegotiating the key.
The driver can read back encrypted data from a protected surface. For more information, see
The driver requires a separate key to read encrypted data from a protected surface.
If the encryption type is D3DCRYPTOTYPE_AES128_CTR, the application must use a sequential count in the
The driver supports encrypted slice data, but does not support any other encrypted data in the compressed buffer. The caller should not encrypt any data within the buffer other than the slice data.
Note: The driver should only report this flag for the specific profiles that have this limitation.
The driver can copy encrypted data from one resource to another, decrypting the data as part of the process.
The hardware supports the protection of specific resources. This means that:
Note: This enumeration value is supported starting with Windows 10.
Physical pages of a protected resource can be evicted and potentially paged to disk in low memory conditions without losing the contents of the resource when paged back in.
Note: This enumeration value is supported starting with Windows 10.
The hardware supports an automatic teardown mechanism that could trigger hardware keys or protected content to become lost in some conditions. The application can register to be notified when these events occur.
Note: This enumeration value is supported starting with Windows 10.
The secure environment is tightly coupled with the GPU and an
Note: This enumeration value is supported starting with Windows 10.
Specifies the context in which a query occurs.
This enum is used by the following:
The query can occur in all contexts.
The query occurs in the context of a 3D command queue.
The query occurs in the context of a 3D compute queue.
The query occurs in the context of a 3D copy queue.
The query occurs in the context of video.
Specifies how to handle the existing contents of a resource during a copy or update operation of a region within that resource.
The existing contents of the resource cannot be overwritten.
The existing contents of the resource are undefined and can be discarded.
Options for performance counters.
Independent hardware vendors may define their own set of performance counters for their devices, by giving the enumeration value a number that is greater than the value for
This enumeration is used by
Define a performance counter that is dependent on the hardware device.
Data type of a performance counter.
These flags are an output parameter in
32-bit floating point.
16-bit unsigned integer.
32-bit unsigned integer.
64-bit unsigned integer.
Specifies the types of CPU access allowed for a resource.
This enumeration is used in
Applications may combine one or more of these flags with a logical OR. When possible, create resources with no CPU access flags, as this enables better resource optimization.
The
The resource is to be mappable so that the CPU can change its contents. Resources created with this flag cannot be set as outputs of the pipeline and must be created with either dynamic or staging usage (see
The resource is to be mappable so that the CPU can read its contents. Resources created with this flag cannot be set as either inputs or outputs to the pipeline and must be created with staging usage (see
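To illustrate, a staging resource gives the CPU read access to data copied back from the GPU. This is a minimal sketch, assuming an existing ID3D11Device* named device; the 256-byte size is arbitrary.

```cpp
// A staging buffer the CPU can read back from. D3D11_CPU_ACCESS_READ
// requires staging usage, and staging resources cannot be bound to the pipeline.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth      = 256;
desc.Usage          = D3D11_USAGE_STAGING;
desc.BindFlags      = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

ID3D11Buffer* staging = nullptr;
HRESULT hr = device->CreateBuffer(&desc, nullptr, &staging);
```

After copying GPU data into the staging buffer (for example with CopyResource), the CPU reads it through Map with D3D11_MAP_READ.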
Describes flags that are used to create a device context state object (
Represents the status of an
Indicates triangles facing a particular direction are not drawn.
This enumeration is part of a rasterizer-state object description (see
Always draw all triangles.
Do not draw triangles that are front-facing.
Do not draw triangles that are back-facing.
Specifies the parts of the depth stencil to clear.
These flags are used when calling
Clear the depth buffer, using fast clear if possible, then place the resource in a compressed state.
Clear the stencil buffer, using fast clear if possible, then place the resource in a compressed state.
Specifies how to access a resource used in a depth-stencil view.
This enumeration is used in
The resource will be accessed as a 1D texture.
The resource will be accessed as an array of 1D textures.
The resource will be accessed as a 2D texture.
The resource will be accessed as an array of 2D textures.
The resource will be accessed as a 2D texture with multisampling.
The resource will be accessed as an array of 2D textures with multisampling.
Depth-stencil view options.
This enumeration is used by
Limiting a depth-stencil buffer to read-only access allows more than one depth-stencil view to be bound to the pipeline simultaneously, since it is not possible to have a read/write conflict between separate views.
Indicates that depth values are read only.
Indicates that stencil values are read only.
Identify the portion of a depth-stencil buffer for writing depth data.
Turn off writes to the depth-stencil buffer.
Turn on writes to the depth-stencil buffer.
Device context options.
This enumeration is used by
The device context is an immediate context.
The device context is a deferred context.
Describes parameters that are used to create a device.
Device creation flags are used by
An application might dynamically create (and destroy) threads to improve performance, especially on a machine with multiple CPU cores. There may be cases, however, when an application needs to prevent extra threads from being created. This can happen when you want to simplify debugging, profile code, or develop a tool, for instance. For these cases, use
Use this flag if your application will only call methods of Direct3D 11 interfaces from a single thread. By default, the
Creates a device that supports the debug layer.
To use this flag, you must have D3D11*SDKLayers.dll installed; otherwise, device creation fails. To get D3D11_1SDKLayers.dll, install the SDK for Windows 8.
Prevents multiple threads from being created. When this flag is used with a Windows Advanced Rasterization Platform (WARP) device, no additional threads will be created by WARP and all rasterization will occur on the calling thread. This flag is not recommended for general use. See remarks.
Creates a device that supports BGRA formats (
Causes the device and driver to keep information that you can use for shader debugging. The exact impact of this flag will vary from driver to driver.
To use this flag, you must have D3D11_1SDKLayers.dll installed; otherwise, device creation fails. The created device supports the debug layer. To get D3D11_1SDKLayers.dll, install the SDK for Windows 8.
If you use this flag and the current driver does not support shader debugging, device creation fails. Shader debugging requires a driver that is implemented to the WDDM for Windows 8 (WDDM 1.2).
Direct3D 11: This value is not supported until Direct3D 11.1.
Causes the Direct3D runtime to ignore registry settings that turn on the debug layer. You can turn on the debug layer by using the DirectX Control Panel that was included as part of the DirectX SDK. We shipped the last version of the DirectX SDK in June 2010; you can download it from the Microsoft Download Center. You can set this flag in your app, typically in release builds only, to prevent end users from using the DirectX Control Panel to monitor how the app uses Direct3D.
Note: You can also set this flag in your app to prevent Direct3D debugging tools, such as Visual Studio Ultimate 2012, from hooking your app. Windows 8.1: This flag doesn't prevent Visual Studio 2013 and later running on Windows 8.1 and later from hooking your app; instead use
Direct3D 11: This value is not supported until Direct3D 11.1.
Use this flag if the device will produce GPU workloads that take more than two seconds to complete, and you want the operating system to allow them to successfully finish. If this flag is not set, the operating system performs timeout detection and recovery when it detects a GPU packet that took more than two seconds to execute. If this flag is set, the operating system allows such a long-running packet to execute without resetting the GPU. We recommend not setting this flag if your device needs to be highly responsive, so that the operating system can detect and recover from GPU timeouts. We recommend setting this flag if your device needs to perform time-consuming background tasks such as compute, image recognition, and video encoding, to allow such tasks to successfully finish.
Direct3D 11: This value is not supported until Direct3D 11.1.
Forces the creation of the Direct3D device to fail if the display driver is not implemented to the WDDM for Windows 8 (WDDM 1.2). When the display driver is not implemented to WDDM 1.2, only a Direct3D device that is created with feature level 9.1, 9.2, or 9.3 supports video; therefore, if this flag is set, the runtime creates the Direct3D device only for feature level 9.1, 9.2, or 9.3. We recommend not specifying this flag for applications that want to favor Direct3D capability over video. If feature level 10 and higher is available, the runtime will use that feature level regardless of video support.
If this flag is set, device creation on the Basic Render Device (BRD) will succeed regardless of the BRD's missing support for video decode. This is because the Media Foundation video stack operates in software mode on BRD. In this situation, if you force the video stack to create the Direct3D device twice (create the device once with this flag, next discover BRD, then again create the device without the flag), you actually degrade performance.
If you attempt to create a Direct3D device with driver type
Direct3D 11: This value is not supported until Direct3D 11.1.
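The flags above can be sketched in a typical device-creation call. This is a minimal example, not a complete initialization path; it requests the debug layer only in debug builds, since the flag fails device creation when the SDK layers are not installed.

```cpp
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;   // requires the SDK layers to be installed
#endif

ID3D11Device*        device  = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL    featureLevel;
HRESULT hr = D3D11CreateDevice(
    nullptr,                    // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                    // no software rasterizer module
    flags,
    nullptr, 0,                 // default feature-level list
    D3D11_SDK_VERSION,
    &device, &featureLevel, &context);
```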
Direct3D 11 feature options.
This enumeration is used when querying a driver about support for these features by calling
The driver supports multithreading. To see an example of testing a driver for multithread support, see How To: Check for Driver Support. Refer to
Supports the use of the double-precision shaders in HLSL. Refer to
Supports the formats in
Supports the formats in
Supports compute shaders and raw and structured buffers. Refer to
Supports Direct3D 11.1 feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports specific adapter architecture. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports Direct3D 9 feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports minimum precision of shaders. For more info about HLSL minimum precision, see using HLSL minimum precision. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports the Direct3D 9 shadowing feature. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports Direct3D 11.2 feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.2.
Supports Direct3D 11.2 instancing options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.2.
Supports Direct3D 11.2 marker options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.2.
Supports Direct3D 9 feature options, which include the Direct3D 9 shadowing feature and instancing support. Refer to
Direct3D 11: This value is not supported until Direct3D 11.2.
Supports Direct3D 11.3 conservative rasterization feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.3.
Supports Direct3D 11.4 conservative rasterization feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.4.
Supports GPU virtual addresses. Refer to
Supports a single boolean for NV12 shared textures. Refer to
Direct3D 11: This value is not supported until Direct3D 11.4.
Determines the fill mode to use when rendering triangles.
This enumeration is part of a rasterizer-state object description (see
Draw lines connecting the vertices. Adjacent vertices are not drawn.
Fill the triangles formed by the vertices. Adjacent vertices are not drawn.
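A rasterizer-state description combines a fill mode with the cull modes described earlier. As a hedged sketch (assuming an existing ID3D11Device* named device), wireframe rendering with culling disabled is a common debugging configuration:

```cpp
D3D11_RASTERIZER_DESC rd = {};
rd.FillMode        = D3D11_FILL_WIREFRAME;  // draw only the triangle edges
rd.CullMode        = D3D11_CULL_NONE;       // draw all triangles
rd.DepthClipEnable = TRUE;

ID3D11RasterizerState* rasterState = nullptr;
HRESULT hr = device->CreateRasterizerState(&rd, &rasterState);
```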
Filtering options during texture sampling.
During texture sampling, one or more texels are read and combined (this is called filtering) to produce a single value. Point sampling reads a single texel while linear sampling reads two texels (endpoints) and linearly interpolates a third value between the endpoints.
HLSL texture-sampling functions also support comparison filtering during texture sampling. Comparison filtering compares each sampled texel against a comparison value. The boolean result is blended the same way that normal texture filtering is blended.
You can use HLSL intrinsic texture-sampling functions that implement texture filtering only or companion functions that use texture filtering with comparison filtering.
Texture Sampling Function | Texture Sampling Function with Comparison Filtering
---|---
sample | samplecmp or samplecmplevelzero
Comparison filters only work with textures that have the following DXGI formats: R32_FLOAT_X8X24_TYPELESS, R32_FLOAT, R24_UNORM_X8_TYPELESS, R16_UNORM.
Use point sampling for minification, magnification, and mip-level sampling.
Use point sampling for minification and magnification; use linear interpolation for mip-level sampling.
Use point sampling for minification; use linear interpolation for magnification; use point sampling for mip-level sampling.
Use point sampling for minification; use linear interpolation for magnification and mip-level sampling.
Use linear interpolation for minification; use point sampling for magnification and mip-level sampling.
Use linear interpolation for minification; use point sampling for magnification; use linear interpolation for mip-level sampling.
Use linear interpolation for minification and magnification; use point sampling for mip-level sampling.
Use linear interpolation for minification, magnification, and mip-level sampling.
Use anisotropic interpolation for minification, magnification, and mip-level sampling.
Use point sampling for minification, magnification, and mip-level sampling. Compare the result to the comparison value.
Use point sampling for minification and magnification; use linear interpolation for mip-level sampling. Compare the result to the comparison value.
Use point sampling for minification; use linear interpolation for magnification; use point sampling for mip-level sampling. Compare the result to the comparison value.
Use point sampling for minification; use linear interpolation for magnification and mip-level sampling. Compare the result to the comparison value.
Use linear interpolation for minification; use point sampling for magnification and mip-level sampling. Compare the result to the comparison value.
Use linear interpolation for minification; use point sampling for magnification; use linear interpolation for mip-level sampling. Compare the result to the comparison value.
Use linear interpolation for minification and magnification; use point sampling for mip-level sampling. Compare the result to the comparison value.
Use linear interpolation for minification, magnification, and mip-level sampling. Compare the result to the comparison value.
Use anisotropic interpolation for minification, magnification, and mip-level sampling. Compare the result to the comparison value.
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
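To sketch one of these filters in use, a sampler-state description carries the filter choice; the example below selects trilinear sampling (linear minification, magnification, and mip-level interpolation) and assumes an existing ID3D11Device* named device.

```cpp
D3D11_SAMPLER_DESC sd = {};
sd.Filter         = D3D11_FILTER_MIN_MAG_MIP_LINEAR;  // trilinear
sd.AddressU       = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV       = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressW       = D3D11_TEXTURE_ADDRESS_WRAP;
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;           // no comparison filtering
sd.MinLOD         = 0.0f;
sd.MaxLOD         = D3D11_FLOAT32_MAX;

ID3D11SamplerState* sampler = nullptr;
HRESULT hr = device->CreateSamplerState(&sd, &sampler);
```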
Specifies the type of sampler filter reduction.
This enum is used by the
Indicates standard (default) filter reduction.
Indicates a comparison filter reduction.
Indicates minimum filter reduction.
Indicates maximum filter reduction.
Types of magnification or minification sampler filters.
Point filtering used as a texture magnification or minification filter. The texel with coordinates nearest to the desired pixel value is used. The texture filter to be used between mipmap levels is nearest-point mipmap filtering. The rasterizer uses the color from the texel of the nearest mipmap texture.
Bilinear interpolation filtering used as a texture magnification or minification filter. A weighted average of a 2 x 2 area of texels surrounding the desired pixel is used. The texture filter to use between mipmap levels is trilinear mipmap interpolation. The rasterizer linearly interpolates pixel color, using the texels of the two nearest mipmap textures.
Which resources are supported for a given format and given device (see
Type of data contained in an input slot.
Use these values to specify the type of data for a particular input element (see
Input data is per-vertex data.
Input data is per-instance data.
Specifies logical operations to configure for a render target.
Clears the render target.
Sets the render target.
Copies the render target.
Performs an inverted-copy of the render target.
No operation is performed on the render target.
Inverts the render target.
Performs a logical AND operation on the render target.
Performs a logical NAND operation on the render target.
Performs a logical OR operation on the render target.
Performs a logical NOR operation on the render target.
Performs a logical XOR operation on the render target.
Performs a logical equal operation on the render target.
Performs a logical AND and reverse operation on the render target.
Performs a logical AND and invert operation on the render target.
Performs a logical OR and reverse operation on the render target.
Performs a logical OR and invert operation on the render target.
Specifies how the CPU should respond when an application calls the
This enumeration is used by
Identifies a resource to be accessed for reading and writing by the CPU. Applications may combine one or more of these flags.
This enumeration is used in
These remarks are divided into the following topics:
Resource is mapped for reading. The resource must have been created with read access (see
Resource is mapped for writing. The resource must have been created with write access (see
Resource is mapped for reading and writing. The resource must have been created with read and write access (see
Resource is mapped for writing; the previous contents of the resource will be undefined. The resource must have been created with write access and dynamic usage (See
Resource is mapped for writing; the existing contents of the resource cannot be overwritten (see Remarks). This flag is only valid on vertex and index buffers. The resource must have been created with write access (see
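As a hedged sketch of the write-discard pattern: a dynamic buffer is typically rewritten each frame, and the discard flag hands back a fresh region whose previous contents must not be read. The names context, dynamicBuffer, and Constants are assumed to exist already.

```cpp
D3D11_MAPPED_SUBRESOURCE mapped = {};
HRESULT hr = context->Map(dynamicBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
if (SUCCEEDED(hr))
{
    // 'constants' is a hypothetical struct of type Constants holding new data.
    memcpy(mapped.pData, &constants, sizeof(Constants));
    context->Unmap(dynamicBuffer, 0);
}
```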
Categories of debug messages. This will identify the category of a message when retrieving a message with
This is part of the Information Queue feature. See
Debug message severity levels for an information queue.
Use these values to allow or deny message categories to pass through the storage and retrieval filters for an information queue (see
Defines some type of corruption which has occurred.
Defines an error message.
Defines a warning message.
Defines an information message.
Defines a message other than corruption, error, warning, or information.
Direct3D 11: This value is not supported until Direct3D 11.1.
Flags that describe miscellaneous query behavior.
This flag is part of a query description (see
Tell the hardware that, if it is not yet sure whether something is hidden, it should draw it anyway. This is only used with an occlusion predicate. Predication data cannot be returned to your application via
Query types.
Create a query with
Determines whether or not the GPU is finished processing commands. When the GPU is finished processing commands
Get the number of samples that passed the depth and stencil tests in between
Get a timestamp value where
Determines whether or not a
Get pipeline statistics, such as the number of pixel shader invocations in between
Similar to
Get streaming output statistics, such as the number of primitives streamed out in between
Determines whether or not any of the streaming output buffers overflowed in between
Get streaming output statistics for stream 0, such as the number of primitives streamed out in between
Determines whether or not the stream 0 output buffers overflowed in between
Get streaming output statistics for stream 1, such as the number of primitives streamed out in between
Determines whether or not the stream 1 output buffers overflowed in between
Get streaming output statistics for stream 2, such as the number of primitives streamed out in between
Determines whether or not the stream 2 output buffers overflowed in between
Get streaming output statistics for stream 3, such as the number of primitives streamed out in between
Determines whether or not the stream 3 output buffers overflowed in between
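To sketch the query pattern described above: an occlusion query counts the samples that pass depth and stencil testing between Begin and End. This assumes existing device and context pointers.

```cpp
D3D11_QUERY_DESC qd = {};
qd.Query = D3D11_QUERY_OCCLUSION;

ID3D11Query* query = nullptr;
device->CreateQuery(&qd, &query);

context->Begin(query);
// ... issue the draw calls to be measured ...
context->End(query);

UINT64 samplesPassed = 0;
while (context->GetData(query, &samplesPassed, sizeof(samplesPassed), 0) == S_FALSE)
{
    // Result not ready yet. A real application would poll on a later frame
    // rather than spin, to avoid stalling the CPU on the GPU.
}
```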
These flags identify the type of resource that will be viewed as a render target.
This enumeration is used in
Do not use this value, as it will cause
The resource will be accessed as a buffer.
The resource will be accessed as a 1D texture.
The resource will be accessed as an array of 1D textures.
The resource will be accessed as a 2D texture.
The resource will be accessed as an array of 2D textures.
The resource will be accessed as a 2D texture with multisampling.
The resource will be accessed as an array of 2D textures with multisampling.
The resource will be accessed as a 3D texture.
Options for the amount of information to report about a device object's lifetime.
This enumeration is used by
Several inline functions exist to combine the options using operators; see the D3D11SDKLayers.h header file for details.
Specifies to obtain a summary about a device object's lifetime.
Specifies to obtain detailed information about a device object's lifetime.
Do not use this enumeration constant. It is for internal use only.
Identifies the type of resource being used.
This enumeration is used in
Resource is of unknown type.
Resource is a buffer.
Resource is a 1D texture.
Resource is a 2D texture.
Resource is a 3D texture.
Identifies options for resources.
This enumeration is used in
These flags can be combined by bitwise OR.
The
Enables MIP map generation by using
Enables resource data sharing between two or more Direct3D devices. The only resources that can be shared are 2D non-mipmapped textures.
WARP and REF devices do not support shared resources. If you try to create a resource with this flag on either a WARP or REF device, the create method will return an E_OUTOFMEMORY error code.
Note: Starting with Windows 8, WARP devices fully support shared resources. Note: Starting with Windows 8, we recommend that you enable resource data sharing between two or more Direct3D devices by using a combination of the
Sets a resource to be a cube texture created from a Texture2DArray that contains 6 textures.
Enables instancing of GPU-generated content.
Enables a resource as a byte address buffer.
Enables a resource as a structured buffer.
Enables a resource with MIP map clamping for use with
Enables the resource to be synchronized by using the
If you call any of these methods with the
WARP and REF devices do not support shared resources. If you try to create a resource with this flag on either a WARP or REF device, the create method will return an E_OUTOFMEMORY error code.
Note: Starting with Windows 8, WARP devices fully support shared resources.
Enables a resource compatible with GDI. You must set the
Consider the following programming tips for using
You must set the texture format to one of the following types.
Set this flag to enable the use of NT HANDLE values when you create a shared resource. By enabling this flag, you deprecate the use of existing HANDLE values.
When you use this flag, you must combine it with the
Without this flag set, the runtime does not strictly validate shared resource parameters (that is, formats, flags, usage, and so on). When the runtime does not validate shared resource parameters, behavior of much of the Direct3D API might be undefined and might vary from driver to driver.
Direct3D 11 and earlier: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the resource might contain protected content; therefore, the operating system should use the resource only when the driver and hardware support content protection. If the driver and hardware do not support content protection and you try to create a resource with this flag, the resource creation fails.
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the operating system restricts access to the shared surface. You can use this flag together with the
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the driver restricts access to the shared surface. You can use this flag in conjunction with the
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the resource is guarded. Such a resource is returned by the
A guarded resource automatically restricts all writes to the region that is related to one of the preceding APIs. Additionally, the resource enforces access to the ROI with these restrictions:
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the resource is a tile pool.
Direct3D 11: This value is not supported until Direct3D 11.2.
Set this flag to indicate that the resource is a tiled resource.
Direct3D 11: This value is not supported until Direct3D 11.2.
Set this flag to indicate that the resource should be created such that it will be protected by the hardware. Resource creation will fail if hardware content protection is not supported.
This flag has the following restrictions:
Creating a texture using this flag does not automatically guarantee that hardware protection will be enabled for the underlying allocation. Some implementations require that the DRM components are first initialized prior to any guarantees of protection.
Note: This enumeration value is supported starting with Windows 10.
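As a sketch of one of these options in use, the cube-texture flag views a six-element Texture2DArray as the six faces of a cube. The 512x512 size and format here are arbitrary; an existing ID3D11Device* named device is assumed.

```cpp
D3D11_TEXTURE2D_DESC td = {};
td.Width            = 512;
td.Height           = 512;
td.MipLevels        = 1;
td.ArraySize        = 6;                                // one slice per face
td.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
td.SampleDesc.Count = 1;
td.Usage            = D3D11_USAGE_DEFAULT;
td.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
td.MiscFlags        = D3D11_RESOURCE_MISC_TEXTURECUBE;  // view the array as a cube

ID3D11Texture2D* cubeTexture = nullptr;
HRESULT hr = device->CreateTexture2D(&td, nullptr, &cubeTexture);
```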
Identifies expected resource use during rendering. The usage directly reflects whether a resource is accessible by the CPU and/or the graphics processing unit (GPU).
An application identifies the way a resource is intended to be used (its usage) in a resource description. There are several structures for creating resources including:
Differences between Direct3D 9 and Direct3D 10/11: In Direct3D 9, you specify the type of memory a resource should be created in at resource creation time (using D3DPOOL). It was an application's job to decide what memory pool would provide the best combination of functionality and performance. In Direct3D 10/11, an application no longer specifies what type of memory (the pool) to create a resource in. Instead, you specify the intended usage of the resource, and let the runtime (in concert with the driver and a memory manager) choose the type of memory that will achieve the best performance.
A resource that requires read and write access by the GPU. This is likely to be the most common usage choice.
A resource that can only be read by the GPU. It cannot be written by the GPU, and cannot be accessed at all by the CPU. This type of resource must be initialized when it is created, since it cannot be changed after creation.
A resource that is accessible by both the GPU (read only) and the CPU (write only). A dynamic resource is a good choice for a resource that will be updated by the CPU at least once per frame. To update a dynamic resource, use a Map method.
For info about how to use dynamic resources, see How to: Use dynamic resources.
A resource that supports data transfer (copy) from the GPU to the CPU.
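A minimal sketch of the immutable usage described above: a GPU-read-only resource must receive its contents at creation time, since it cannot be written afterwards. The triangle positions are illustrative data, and an existing ID3D11Device* named device is assumed.

```cpp
// Hypothetical vertex data: one triangle, xyz positions.
float vertices[] = {
     0.0f,  0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
};

D3D11_BUFFER_DESC desc = {};
desc.ByteWidth = sizeof(vertices);
desc.Usage     = D3D11_USAGE_IMMUTABLE;     // GPU read only, set once
desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = vertices;                    // initial data is mandatory here

ID3D11Buffer* vertexBuffer = nullptr;
HRESULT hr = device->CreateBuffer(&desc, &init, &vertexBuffer);
```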
Describes the level of support for shader caching in the current graphics driver.
This enum is used by the D3D_FEATURE_DATA_SHADER_CACHE structure.
Indicates that the driver does not support shader caching.
Indicates that the driver supports an OS-managed shader cache that stores compiled shaders in memory during the current run of the application.
Indicates that the driver supports an OS-managed shader cache that stores compiled shaders on disk to accelerate future runs of the application.
Values that specify minimum precision levels at shader stages.
Minimum precision level is 10-bit.
Minimum precision level is 16-bit.
Identifies how to view a buffer resource.
This enumeration is used by
View the buffer as raw. For more info about raw viewing of buffers, see Raw Views of Buffers.
Options that specify how to perform shader debug tracking.
This enumeration is used by the following methods:
No debug tracking is performed.
Track the reading of uninitialized data.
Track read-after-write hazards.
Track write-after-read hazards.
Track write-after-write hazards.
Track that hazards are allowed in which data is written but the value does not change.
Track that only one type of atomic operation is used on an address.
Track read-after-write hazards across thread groups.
Track write-after-read hazards across thread groups.
Track write-after-write hazards across thread groups.
Track that only one type of atomic operation is used on an address across thread groups.
Track hazards that are specific to unordered access views (UAVs).
Track all hazards.
Track all hazards and track that hazards are allowed in which data is written but the value does not change.
All of the preceding tracking options are set except
Indicates which resource types to track.
The
No resource types are tracked.
Track device memory that is created with unordered access view (UAV) bind flags.
Track device memory that is created without UAV bind flags.
Track all device memory.
Track all shaders that use group shared memory.
Track all device memory except device memory that is created without UAV bind flags.
Track all device memory except device memory that is created with UAV bind flags.
Track all memory on the device.
Specifies a multi-sample pattern type.
-An app calls
The runtime defines the following standard sample patterns for 1(trivial), 2, 4, 8, and 16 sample counts. Hardware must support 1, 4, and 8 sample counts. Hardware vendors can expose more sample counts beyond these. However, if vendors support 2, 4(required), 8(required), or 16, they must also support the corresponding standard pattern or center pattern for each of those sample counts.
-Pre-defined multi-sample patterns required for Direct3D 11 and Direct3D 10.1 hardware.
Pattern where all of the samples are located at the pixel center.
The stencil operations that can be performed during depth-stencil testing.
-Keep the existing stencil data.
Set the stencil data to 0.
Set the stencil data to the reference value set by calling
Increment the stencil value by 1, and clamp the result.
Decrement the stencil value by 1, and clamp the result.
Invert the stencil data.
Increment the stencil value by 1, and wrap the result if necessary.
Decrement the stencil value by 1, and wrap the result if necessary.
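The clamp and wrap variants above differ only at the 8-bit boundaries. A minimal sketch of the update rules, assuming an 8-bit stencil buffer (the enum and function names here are illustrative, not the Direct3D identifiers):

```cpp
#include <cstdint>

// Illustrative model of the stencil update operations described above.
enum class StencilOp { Keep, Zero, Replace, IncrSat, DecrSat, Invert, IncrWrap, DecrWrap };

uint8_t ApplyStencilOp(StencilOp op, uint8_t current, uint8_t reference) {
    switch (op) {
        case StencilOp::Keep:     return current;                               // keep existing data
        case StencilOp::Zero:     return 0;                                     // set to 0
        case StencilOp::Replace:  return reference;                             // use reference value
        case StencilOp::IncrSat:  return current == 0xFF ? 0xFF : current + 1;  // clamp at 255
        case StencilOp::DecrSat:  return current == 0x00 ? 0x00 : current - 1;  // clamp at 0
        case StencilOp::Invert:   return static_cast<uint8_t>(~current);        // bitwise invert
        case StencilOp::IncrWrap: return static_cast<uint8_t>(current + 1);     // 255 wraps to 0
        case StencilOp::DecrWrap: return static_cast<uint8_t>(current - 1);     // 0 wraps to 255
    }
    return current;
}
```

Note how the saturating and wrapping pairs agree everywhere except at 0 and 255.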
Identify a technique for resolving texture coordinates that are outside of the boundaries of a texture.
-Tile the texture at every (u,v) integer junction. For example, for u values between 0 and 3, the texture is repeated three times.
Flip the texture at every (u,v) integer junction. For u values between 0 and 1, for example, the texture is addressed normally; between 1 and 2, the texture is flipped (mirrored); between 2 and 3, the texture is normal again; and so on.
Texture coordinates outside the range [0.0, 1.0] are set to the texture color at 0.0 or 1.0, respectively.
Texture coordinates outside the range [0.0, 1.0] are set to the border color specified in
Similar to
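The wrap, mirror, and clamp rules above can be sketched for a single normalized coordinate. This is an illustrative model, not driver code, and the border and mirror-once modes are omitted because they depend on sampler state not shown here:

```cpp
#include <cmath>

// Tile the texture at every integer junction: only the fractional part matters.
double WrapCoord(double u)   { return u - std::floor(u); }

// Flip the texture at every integer junction: the pattern repeats with period 2,
// and the second half of each period is mirrored.
double MirrorCoord(double u) {
    double t = std::fmod(std::fabs(u), 2.0);
    return t <= 1.0 ? t : 2.0 - t;
}

// Coordinates outside [0, 1] take the texel color at 0.0 or 1.0, respectively.
double ClampCoord(double u)  { return u < 0.0 ? 0.0 : (u > 1.0 ? 1.0 : u); }
```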
The different faces of a cube texture.
-Positive X face.
Negative X face.
Positive Y face.
Negative Y face.
Positive Z face.
Negative Z face.
Specifies texture layout options.
-This enumeration controls the swizzle pattern of default textures and enables map support on default textures. Callers must query
The standard swizzle format applies within each page-sized chunk, and pages are laid out in linear order with respect to one another. A 16-bit interleave pattern defines the conversion from pre-swizzled intra-page location to the post-swizzled location.
To demonstrate, consider the 32bpp swizzle format above. This is represented by the following interleave masks, where bits on the left are most-significant.
UINT xBytesMask = 1010 1010 1000 1111
UINT yBytesMask = 0101 0101 0111 0000
To compute the swizzled address, the following code could be used (where the _pdep_u32 instruction is supported):
UINT swizzledOffset = resourceBaseOffset + _pdep_u32(xOffset, xBytesMask) + _pdep_u32(yOffset, yBytesMask);
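Where the _pdep_u32 intrinsic is not available, the bit-deposit step can be emulated in portable code. The following sketch assumes the 32bpp masks shown above; Pdep32 and SwizzledOffset are illustrative names, not API functions:

```cpp
#include <cstdint>

// Portable fallback for _pdep_u32: deposits the low-order bits of `value`
// into the positions of the set bits of `mask`, lowest set bit first.
uint32_t Pdep32(uint32_t value, uint32_t mask) {
    uint32_t result = 0;
    for (uint32_t bb = 1; mask != 0; bb <<= 1) {
        uint32_t lowest = mask & (0u - mask);   // isolate lowest set bit of mask
        if (value & bb) result |= lowest;       // deposit the next value bit there
        mask &= mask - 1;                       // clear that mask bit
    }
    return result;
}

// Swizzled address for the 32bpp interleave pattern described above.
uint32_t SwizzledOffset(uint32_t resourceBaseOffset, uint32_t xOffset, uint32_t yOffset) {
    const uint32_t xBytesMask = 0b1010101010001111;
    const uint32_t yBytesMask = 0b0101010101110000;
    return resourceBaseOffset + Pdep32(xOffset, xBytesMask) + Pdep32(yOffset, yBytesMask);
}
```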
- The texture layout is undefined, and is selected by the driver.
Data for the texture is stored in row major (sometimes called pitch-linear) order.
A default texture uses the standardized swizzle pattern.
Identifies how to copy a tile.
-Indicates that the GPU isn't currently referencing any of the portions of destination memory being written.
Indicates that the
Indicates that the
Indicates the tier level at which tiled resources are supported.
-Tiled resources are not supported.
Tier_1 tiled resources are supported.
The device supports calls to CreateTexture2D and so on with the
The device supports calls to CreateBuffer with the
If you access tiles (read or write) that are
Tier_2 tiled resources are supported.
Superset of Tier_1 functionality, which includes this additional support:
Tier_3 tiled resources are supported.
Superset of Tier_2 functionality, Tier 3 is essentially Tier 2 but with the additional support of Texture3D for Tiled Resources.
Identifies how to perform a tile-mapping operation.
-Indicates that no overwriting of tiles occurs in the tile-mapping operation.
Specifies a range of tile mappings to use with
Identifies unordered-access view options for a buffer resource.
-Resource contains raw, unstructured data. Requires the UAV format to be
Allow data to be appended to the end of the buffer.
Adds a counter to the unordered-access-view buffer.
Unordered-access view options.
- This enumeration is used by an unordered-access-view description (see
The view type is unknown.
View the resource as a buffer.
View the resource as a 1D texture.
View the resource as a 1D texture array.
View the resource as a 2D texture.
View the resource as a 2D texture array.
View the resource as a 3D texture array.
Specifies how to access a resource that is used in a video decoding output view.
-This enumeration is used with the
Not a valid value.
The resource will be accessed as a 2D texture.
Specifies a type of compressed buffer for decoding.
-Picture decoding parameter buffer.
Macroblock control command buffer.
Residual difference block data buffer.
Deblocking filter control command buffer.
Inverse quantization matrix buffer.
Slice-control buffer.
Bitstream data buffer.
Motion vector buffer.
Film grain synthesis data buffer.
Specifies capabilities of the video decoder.
-Indicates that the graphics driver supports at least a subset of downsampling operations.
Indicates that the decoding hardware cannot support the decode operation in real-time. Decoding is still supported for transcoding scenarios. With this capability, it is possible that decoding can occur in real-time if downsampling is enabled.
Indicates that the driver supports changing down sample parameters after the initial down sample parameters have been applied. For more information, see
Describes how a video stream is interlaced.
-Frames are progressive.
Frames are interlaced. The top field of each frame is displayed first.
Frames are interlaced. The bottom field of each frame is displayed first.
Specifies the alpha fill mode for video processing.
-Alpha values inside the target rectangle are set to opaque.
Alpha values inside the target rectangle are set to the alpha value specified in the background color. To set the background color, call the
Existing alpha values remain unchanged in the output surface.
Alpha values are taken from an input stream, scaled, and copied to the corresponding destination rectangle for that stream. The input stream is specified in the StreamIndex parameter of the
If the input stream does not have alpha data, the video processor sets the alpha values in the target rectangle to opaque. If the input stream is disabled or the source rectangle is empty, the alpha values in the target rectangle are not modified.
Specifies the automatic image processing capabilities of the video processor.
-Denoise.
Deringing.
Edge enhancement.
Color correction.
Flesh-tone mapping.
Image stabilization.
Enhanced image resolution.
Anamorphic scaling.
Specifies flags that indicate the most efficient methods for performing video processing operations.
-Multi-plane overlay hardware can perform the rotation operation more efficiently than the
Multi-plane overlay hardware can perform the scaling operation more efficiently than the
Multi-plane overlay hardware can perform the colorspace conversion operation more efficiently than the
The video processor output data should be at least triple buffered for optimal performance.
Defines video processing capabilities for a Microsoft Direct3D 11 video processor.
-The video processor can blend video content in linear color space. Most video content is gamma corrected, resulting in nonlinear values. This capability flag means that the video processor converts colors to linear space before blending, which produces better results.
The video processor supports the xvYCC color space for YCbCr data.
The video processor can perform range conversion when the input and output are both RGB but use different color ranges (0-255 or 16-235, for 8-bit RGB).
The video processor can apply a matrix conversion to YCbCr values when the input and output are both YCbCr. For example, the driver can convert colors from BT.601 to BT.709.
The video processor supports the YUV nominal range.
Supported in Windows 8.1 and later.
Defines features that a Microsoft Direct3D 11 video processor can support.
-The video processor can set alpha values on the output pixels. For more information, see
The video processor can downsample the video output. For more information, see
The video processor can perform luma keying. For more information, see
The video processor can apply alpha values from color palette entries.
The driver does not support full video processing capabilities. If this capability flag is set, the video processor has the following limitations:
The video processor can support 3D stereo video. For more information, see
All drivers that set this capability must support the following stereo formats:
The driver can rotate the input data either 90, 180, or 270 degrees clockwise as part of the video processing operation.
The driver supports the VideoProcessorSetStreamAlpha call.
The driver supports the VideoProcessorSetStreamPixelAspectRatio call.
Identifies a video processor filter.
-Brightness filter.
Contrast filter.
Hue filter.
Saturation filter.
Noise reduction filter.
Edge enhancement filter.
Anamorphic scaling filter.
Stereo adjustment filter. When stereo 3D video is enabled, this filter adjusts the offset between the left and right views, allowing the user to reduce potential eye strain.
The filter value indicates the amount by which the left and right views are adjusted. A positive value shifts the images away from each other: the left image toward the left, and the right image toward the right. A negative value shifts the images in the opposite directions, closer to each other.
Defines image filter capabilities for a Microsoft Direct3D 11 video processor.
-These capability flags indicate support for the image filters defined by the
The video processor can adjust the brightness level.
The video processor can adjust the contrast level.
The video processor can adjust hue.
The video processor can adjust the saturation level.
The video processor can perform noise reduction.
The video processor can perform edge enhancement.
The video processor can perform anamorphic scaling. Anamorphic scaling can be used to stretch 4:3 content to a widescreen 16:9 aspect ratio.
For stereo 3D video, the video processor can adjust the offset between the left and right views, allowing the user to reduce potential eye strain.
Defines capabilities related to input formats for a Microsoft Direct3D 11 video processor.
-These flags define video processing capabilities that usually are not needed, and that video devices are therefore not required to support.
The first three flags relate to RGB support for functions that are normally applied to YCbCr video: deinterlacing, color adjustment, and luma keying. A device that supports these functions for YCbCr is not required to support them for RGB input. Supporting RGB input for these functions is an additional capability, reflected by these constants. Note that the driver might convert the input to another color space, perform the indicated function, and then convert the result back to RGB.
Similarly, a device that supports deinterlacing is not required to support deinterlacing of palettized formats. This capability is indicated by the
The video processor can deinterlace an input stream that contains interlaced RGB video.
The video processor can perform color adjustment on RGB video.
The video processor can perform luma keying on RGB video.
The video processor can deinterlace input streams with palettized color formats.
Specifies how a video format can be used for video processing.
-The format can be used as the input to the video processor.
The format can be used as the output from the video processor.
Specifies the inverse telecine (IVTC) capabilities of a video processor.
-The video processor can reverse 3:2 pulldown.
The video processor can reverse 2:2 pulldown.
The video processor can reverse 2:2:2:4 pulldown.
The video processor can reverse 2:3:3:2 pulldown.
The video processor can reverse 3:2:3:2:2 pulldown.
The video processor can reverse 5:5 pulldown.
The video processor can reverse 6:4 pulldown.
The video processor can reverse 8:7 pulldown.
The video processor can reverse 2:2:2:2:2:2:2:2:2:2:2:3 pulldown.
The video processor can reverse other telecine modes not listed here.
Specifies values for the luminance range of YUV data.
-Driver defaults are used, which should be Studio luminance range [16-235],
Studio luminance range [16-235]
Full luminance range [0-255]
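The two luminance ranges differ only by an affine scale. A minimal sketch of the 8-bit studio-to-full expansion follows; this is illustrative arithmetic under the ranges stated above, not a video-processor API:

```cpp
#include <algorithm>
#include <cstdint>

// Expand studio-range luma [16, 235] to full range [0, 255], with rounding.
// Values outside the studio range are clipped. Illustrative only.
uint8_t StudioToFull(uint8_t y) {
    int v = ((static_cast<int>(y) - 16) * 255 + 219 / 2) / 219;  // 219 = 235 - 16
    return static_cast<uint8_t>(std::clamp(v, 0, 255));
}
```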
Specifies the rate at which the video processor produces output frames from an input stream.
-The output is the normal frame rate.
The output is half the frame rate.
The output is a custom frame rate.
Specifies video processing capabilities that relate to deinterlacing, inverse telecine (IVTC), and frame-rate conversion.
-The video processor can perform blend deinterlacing.
In blend deinterlacing, the two fields from an interlaced frame are blended into a single progressive frame. A video processor uses blend deinterlacing when it deinterlaces at half rate, as when converting 60i to 30p. Blend deinterlacing does not require reference frames.
The video processor can perform bob deinterlacing.
In bob deinterlacing, missing field lines are interpolated from the lines above and below. Bob deinterlacing does not require reference frames.
The video processor can perform adaptive deinterlacing.
Adaptive deinterlacing uses spatial or temporal interpolation, and switches between the two on a field-by-field basis, depending on the amount of motion. If the video processor does not receive enough reference frames to perform adaptive deinterlacing, it falls back to bob deinterlacing.
The video processor can perform motion-compensated deinterlacing.
Motion-compensated deinterlacing uses motion vectors to recreate missing lines. If the video processor does not receive enough reference frames to perform motion-compensated deinterlacing, it falls back to bob deinterlacing.
The video processor can perform inverse telecine (IVTC).
If the video processor supports this capability, the ITelecineCaps member of the
The video processor can convert the frame rate by interpolating frames.
Specifies the video rotation states.
-The video is not rotated.
The video is rotated 90 degrees clockwise.
The video is rotated 180 degrees clockwise.
The video is rotated 270 degrees clockwise.
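The three rotation states map pixel coordinates as follows; this is a small sketch assuming a top-left origin, with illustrative struct and function names:

```cpp
// Map a source pixel (x, y) in a width-by-height frame to its position in the
// clockwise-rotated output frame. Top-left origin assumed; illustrative only.
struct Pt { int x, y; };

// 90 degrees clockwise: output frame is height-by-width.
Pt Rotate90CW(Pt p, int /*width*/, int height) { return { height - 1 - p.y, p.x }; }
// 180 degrees: output frame keeps the same dimensions.
Pt Rotate180(Pt p, int width, int height)      { return { width - 1 - p.x, height - 1 - p.y }; }
// 270 degrees clockwise: output frame is height-by-width.
Pt Rotate270CW(Pt p, int width, int /*height*/){ return { p.y, width - 1 - p.x }; }
```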
Defines stereo 3D capabilities for a Microsoft Direct3D 11 video processor.
-The video processor supports the
The video processor supports the
The video processor supports the
The video processor supports the
The video processor can flip one or both views. For more information, see
For stereo 3D video, specifies whether the data in frame 0 or frame 1 is flipped, either horizontally or vertically.
-Neither frame is flipped.
The data in frame 0 is flipped.
The data in frame 1 is flipped.
Specifies the layout in memory of a stereo 3D video frame.
-This enumeration designates the two stereo views as "frame 0" and "frame 1". The LeftViewFrame0 parameter of the VideoProcessorSetStreamStereoFormat method specifies which view is the left view, and which is the right view.
For packed formats, if the source rectangle clips part of the surface, the driver interprets the rectangle in logical coordinates relative to the stereo view, rather than absolute pixel coordinates. The result is that frame 0 and frame 1 are clipped proportionately.
To query whether the device supports stereo 3D video, call
The sample does not contain stereo data. If the stereo format is not specified, this value is the default.
Frame 0 and frame 1 are packed side-by-side, as shown in the following diagram.
All drivers that support stereo video must support this format.
Frame 0 and frame 1 are packed top-to-bottom, as shown in the following diagram.
All drivers that support stereo video must support this format.
Frame 0 and frame 1 are placed in separate resources or in separate texture array elements within the same resource.
All drivers that support stereo video must support this format.
The sample contains non-stereo data. However, the driver should create a left/right output of this sample using a specified offset. The offset is specified in the MonoOffset parameter of the
This format is primarily intended for subtitles and other subpicture data, where the entire sample is presented on the same plane.
Support for this stereo format is optional.
Frame 0 and frame 1 are packed into interleaved rows, as shown in the following diagram.
Support for this stereo format is optional.
Frame 0 and frame 1 are packed into interleaved columns, as shown in the following diagram.
Support for this stereo format is optional.
Frame 0 and frame 1 are packed in a checkerboard format, as shown in the following diagram.
Support for this stereo format is optional.
Specifies the intended use for a video processor.
-Normal video playback. The graphics driver should expose a set of capabilities that are appropriate for real-time video playback.
Optimal speed. The graphics driver should expose a minimal set of capabilities that are optimized for performance.
Use this setting if you want better performance and can accept some reduction in video quality. For example, you might use this setting in power-saving mode or to play video thumbnails.
Optimal quality. The graphics driver should expose its maximum set of capabilities.
Specify this setting to get the best video quality possible. It is appropriate for tasks such as video editing, when quality is more important than speed. It is not appropriate for real-time playback.
Specifies how to access a resource that is used in a video processor input view.
-This enumeration is used with the
Not a valid value.
The resource will be accessed as a 2D texture.
Specifies how to access a resource that is used in a video processor output view.
-This enumeration is used with the
Not a valid value.
The resource will be accessed as a 2D texture.
The resource will be accessed as an array of 2D textures.
Creates a device that represents the display adapter.
- A reference to the video adapter to use when creating a device. Pass
The
A handle to a DLL that implements a software rasterizer. If DriverType is
The runtime layers to enable (see
A reference to an array of
Note: If the Direct3D 11.1 runtime is present on the computer and pFeatureLevels is set to
The number of elements in pFeatureLevels.
The SDK version; use
Returns the address of a reference to an
If successful, returns the first
Returns the address of a reference to an
This method can return one of the Direct3D 11 Return Codes.
This method returns E_INVALIDARG if you set the pAdapter parameter to a non-
This method returns
This entry-point is supported by the Direct3D 11 runtime, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista (KB971644).
To create a Direct3D 11.1 device (
To create a Direct3D 11.2 device (
Set ppDevice and ppImmediateContext to
For an example, see How To: Create a Device and Immediate Context; to create a device and a swap chain at the same time, use D3D11CreateDeviceAndSwapChain.
If you set the pAdapter parameter to a non-
Differences between Direct3D 10 and Direct3D 11: In Direct3D 10, the presence of pAdapter dictated which adapter to use and the DriverType could mismatch what the adapter was. In Direct3D 11, if you are trying to create a hardware or a software device, set pAdapter !=
On the other hand, if pAdapter ==
The function signature PFN_D3D11_CREATE_DEVICE is provided as a typedef, so that you can use dynamic linking techniques (GetProcAddress) instead of statically linking.
Windows Phone 8: This API is supported.
Windows Phone 8.1: This API is supported.
-Creates a device that uses Direct3D 11 functionality in Direct3D 12, specifying a pre-existing D3D12 device to use for D3D11 interop.
- Specifies a pre-existing D3D12 device to use for D3D11 interop. May not be
One or more bitwise OR'ed flags from
An array of any of the following:
The first feature level which is less than or equal to the D3D12 device's feature level will be used to perform D3D11 validation. Creation will fail if no acceptable feature levels are provided. Providing
The size of the feature levels array, in bytes.
An array of unique queues for D3D11On12 to use. Valid queue types: 3D command queue.
The size of the command queue array, in bytes.
Which node of the D3D12 device to use. Only 1 bit may be set.
Pointer to the returned
A reference to the returned
A reference to the returned feature level. May be
This method returns one of the Direct3D 12 Return Codes that are documented for
This method returns
The function signature PFN_D3D11ON12_CREATE_DEVICE is provided as a typedef, so that you can use dynamic linking techniques (GetProcAddress) instead of statically linking.
-This interface encapsulates methods for retrieving data from the GPU asynchronously.
-There are three types of asynchronous interfaces, all of which inherit this interface:
Get the size of the data (in bytes) that is output when calling
Get the size of the data (in bytes) that is output when calling
Size of the data (in bytes) that is output when calling GetData.
Provides a communication channel with the graphics driver or the Microsoft Direct3D runtime.
-To get a reference to this interface, call
Gets the size of the driver's certificate chain.
-Gets a handle to the authenticated channel.
-Gets the size of the driver's certificate chain.
-Receives the size of the certificate chain, in bytes.
If this method succeeds, it returns
Gets the driver's certificate chain.
-The size of the pCertificate array, in bytes. To get the size of the certificate chain, call
A reference to a byte array that receives the driver's certificate chain. The caller must allocate the array.
If this method succeeds, it returns
Gets a handle to the authenticated channel.
-Receives a handle to the channel.
The
There is no explicit creation method; simply declare an
Gets the initialization flags associated with the deferred context that created the command list.
-The GetContextFlags method gets the flags that were supplied to the ContextFlags parameter of
Gets the initialization flags associated with the deferred context that created the command list.
-The context flag is reserved for future use and is always 0.
The GetContextFlags method gets the flags that were supplied to the ContextFlags parameter of
Represents a cryptographic session.
-To get a reference to this interface, call
Gets the type of encryption that is supported by this session.
-The application specifies the encryption type when it creates the session.
-Gets the decoding profile of the session.
-The application specifies the profile when it creates the session.
-Gets the size of the driver's certificate chain.
-To get the certificate, call
Gets a handle to the cryptographic session.
-You can use this handle to associate the session with a decoder. This enables the decoder to decrypt data that is encrypted using this session.
-Gets the type of encryption that is supported by this session.
-Receives a
Value | Meaning
---|---
 | 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher.
The application specifies the encryption type when it creates the session.
-Gets the decoding profile of the session.
-Receives the decoding profile. For a list of possible values, see
The application specifies the profile when it creates the session.
-Gets the size of the driver's certificate chain.
-Receives the size of the certificate chain, in bytes.
If this method succeeds, it returns
To get the certificate, call
Gets the driver's certificate chain.
-The size of the pCertificate array, in bytes. To get the size of the certificate chain, call
A reference to a byte array that receives the driver's certificate chain. The caller must allocate the array.
If this method succeeds, it returns
Gets a handle to the cryptographic session.
-Receives a handle to the session.
You can use this handle to associate the session with a decoder. This enables the decoder to decrypt data that is encrypted using this session.
-Handles the creation, wrapping and releasing of D3D11 resources for Direct3D 11on12.
-This method creates D3D11 resources for use with D3D 11on12.
-A reference to an already-created D3D12 resource or heap.
A
The use of the resource on input, as a bitwise-OR'd combination of
The use of the resource on output, as a bitwise-OR'd combination of
The globally unique identifier (
After the method returns, points to the newly created wrapped D3D11 resource or heap.
This method returns one of the Direct3D 12 Return Codes.
Releases D3D11 resources that were wrapped for D3D 11on12.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
Call this method prior to calling Flush, to insert resource barriers to the appropriate "out" state, and to mark that they should then be expected to be in the "in" state. If no resource list is provided, all wrapped resources are transitioned. These resources will be marked as "not acquired" in hazard tracking until
Keyed mutex resources cannot be provided to this method; use
Releases D3D11 resources that were wrapped for D3D 11on12.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
Call this method prior to calling Flush, to insert resource barriers to the appropriate "out" state, and to mark that they should then be expected to be in the "in" state. If no resource list is provided, all wrapped resources are transitioned. These resources will be marked as "not acquired" in hazard tracking until
Keyed mutex resources cannot be provided to this method; use
Releases D3D11 resources that were wrapped for D3D 11on12.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
Call this method prior to calling Flush, to insert resource barriers to the appropriate "out" state, and to mark that they should then be expected to be in the "in" state. If no resource list is provided, all wrapped resources are transitioned. These resources will be marked as "not acquired" in hazard tracking until
Keyed mutex resources cannot be provided to this method; use
Acquires D3D11 resources for use with D3D 11on12. Indicates that rendering to the wrapped resources can begin again.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
This method marks the resources as "acquired" in hazard tracking.
Keyed mutex resources cannot be provided to this method; use
Acquires D3D11 resources for use with D3D 11on12. Indicates that rendering to the wrapped resources can begin again.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
This method marks the resources as "acquired" in hazard tracking.
Keyed mutex resources cannot be provided to this method; use
Acquires D3D11 resources for use with D3D 11on12. Indicates that rendering to the wrapped resources can begin again.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
This method marks the resources as "acquired" in hazard tracking.
Keyed mutex resources cannot be provided to this method; use
The device interface represents a virtual adapter; it is used to create resources.
Registers the "device removed" event and indicates when a Direct3D device has become removed for any reason, using an asynchronous notification mechanism.
-The handle to the "device removed" event.
A reference to information about the "device removed" event, which can be used in UnregisterDeviceRemoved to unregister the event.
Indicates when a Direct3D device has become removed for any reason, using an asynchronous notification mechanism, rather than as an
Applications register and un-register a Win32 event handle with a particular device. That event handle will be signaled when the device becomes removed. A poll into the device's
ISignalableNotifier or SetThreadpoolWait can be used by UWP apps.
When the graphics device is lost, the app or title will receive the graphics event, so that the app or title knows that its graphics device is no longer valid and it is safe for the app or title to re-create its DirectX devices. In response to this event, the app or title needs to re-create its rendering device and pass it into a SetRenderingDevice call on the composition graphics device objects.
After setting this new rendering device, the app or title needs to redraw content of all the pre-existing surfaces after the composition graphics device's OnRenderingDeviceReplaced event is fired.
This method supports Composition for device loss.
The event is not signaled when it is most ideal to re-create. So, instead, we recommend iterating through the adapter ordinals and creating a device on the first ordinal that succeeds.
The application can register an event with the device. The application will be signaled when the device becomes removed.
If the device is already removed, calls to RegisterDeviceRemovedEvent will signal the event immediately. No device-removed error code will be returned from RegisterDeviceRemovedEvent.
Each "device removed" event is never signaled, or is signaled only once. These events are not signaled during device destruction. These events are unregistered during destruction.
The semantics of RegisterDeviceRemovedEvent are similar to
Unregisters the "device removed" event.
-Information about the "device removed" event, retrieved during a successful RegisterDeviceRemovedEvent call.
See RegisterDeviceRemovedEvent.
-The
The
The
Bind an array of shader resources to the domain-shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then the method will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set a domain shader to the device.
- Pointer to a domain shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Windows Phone 8: This API is supported.
-Set a domain shader to the device.
- Pointer to a domain shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Windows Phone 8: This API is supported.
-Set a domain shader to the device.
- Pointer to a domain shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Windows Phone 8: This API is supported.
-Set an array of sampler states to the domain-shader stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any sampler may be set to
// Default sampler state:
SamplerDesc;
SamplerDesc.Filter = ;
SamplerDesc.AddressU = ;
SamplerDesc.AddressV = ;
SamplerDesc.AddressW = ;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = ;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets the constant buffers used by the domain-shader stage.
- Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available starting with Windows 8, can bind a larger number of
If the application wants the shader to access other parts of the buffer, it must call the DSSetConstantBuffers1 method instead.
Windows Phone 8: This API is supported.
-Get the domain-shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the domain shader currently set on the device.
-Address of a reference to a domain shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler state interfaces from the domain-shader stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the constant buffers used by the domain-shader stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-A geometry-shader interface manages an executable program (a geometry shader) that controls the geometry-shader stage.
-The geometry-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a geometry shader interface, call either
This interface is defined in D3D11.h.
-The
Sets the constant buffers used by the geometry shader pipeline stage.
-Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
You can't use the
The Direct3D 11.1 runtime, which is available starting with Windows 8, can bind a larger number of
If the application wants the shader to access other parts of the buffer, it must call the GSSetConstantBuffers1 method instead.
-Set a geometry shader to the device.
-Pointer to a geometry shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a geometry shader to the device.
-Pointer to a geometry shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a geometry shader to the device.
-Pointer to a geometry shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Bind an array of shader resources to the geometry shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then the method will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set an array of sampler states to the geometry shader pipeline stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any sampler may be set to
//Default sampler state:
D3D11_SAMPLER_DESC SamplerDesc;
SamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
SamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Get the constant buffers used by the geometry shader pipeline stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the geometry shader currently set on the device.
-Address of a reference to a geometry shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the geometry shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler state interfaces from the geometry shader pipeline stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-A hull-shader interface manages an executable program (a hull shader) that controls the hull-shader stage.
-The hull-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a hull-shader interface, call
This interface is defined in D3D11.h.
-The
Bind an array of shader resources to the hull-shader stage.
-If an overlapping resource view is already bound to an output slot, such as a render target, then the method will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set a hull shader to the device.
-Pointer to a hull shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a hull shader to the device.
-Pointer to a hull shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a hull shader to the device.
-Pointer to a hull shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set an array of sampler states to the hull-shader stage.
-Any sampler may be set to
//Default sampler state:
D3D11_SAMPLER_DESC SamplerDesc;
SamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
SamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set the constant buffers used by the hull-shader stage.
-The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available starting with Windows 8, can bind a larger number of
If the application wants the shader to access other parts of the buffer, it must call the HSSetConstantBuffers1 method instead.
-Get the hull-shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the hull shader currently set on the device.
-Address of a reference to a hull shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler state interfaces from the hull-shader stage.
-Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the constant buffers used by the hull-shader stage.
-Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-An information-queue interface stores, retrieves, and filters debug messages. The queue consists of a message queue, an optional storage filter stack, and an optional retrieval filter stack.
- To get this interface, turn on the debug layer and use IUnknown::QueryInterface from the
Windows Phone 8: This API is supported.
-Gets or sets the maximum number of messages that can be added to the message queue.
-When the number of messages in the message queue has reached the maximum limit, new messages coming in will push old messages out.
-Get the number of messages that were allowed to pass through a storage filter.
-Get the number of messages that were denied passage through a storage filter.
-Get the number of messages currently stored in the message queue.
-Get the number of messages that are able to pass through a retrieval filter.
-Get the number of messages that were discarded due to the message count limit.
-Get and set the message count limit with
Get the size of the storage-filter stack in bytes.
-Get the size of the retrieval-filter stack in bytes.
-Gets or sets a boolean that turns the debug output on or off.
-Set the maximum number of messages that can be added to the message queue.
-Maximum number of messages that can be added to the message queue. -1 means no limit.
This method returns one of the following Direct3D 11 Return Codes.
When the number of messages in the message queue has reached the maximum limit, new messages coming in will push old messages out.
-Clear all messages from the message queue.
-Get a message from the message queue.
-Index into message queue after an optional retrieval filter has been applied. This can be between 0 and the number of messages in the message queue that pass through the retrieval filter (which can be obtained with
Returned message (see
Size of pMessage in bytes, including the size of the message string that the pMessage points to.
This method returns one of the following Direct3D 11 Return Codes.
This method does not remove any messages from the message queue.
This method gets messages from the message queue after an optional retrieval filter has been applied.
Applications should call this method twice to retrieve a message - first to obtain the size of the message and second to get the message. Here is a typical example:
// Get the size of the message
SIZE_T messageLength = 0;
HRESULT hr = pInfoQueue->GetMessage(0, NULL, &messageLength);

// Allocate space and get the message
D3D11_MESSAGE* pMessage = (D3D11_MESSAGE*)malloc(messageLength);
hr = pInfoQueue->GetMessage(0, pMessage, &messageLength);
For an overview see Information Queue Overview.
-Get the number of messages that were allowed to pass through a storage filter.
-Number of messages allowed by a storage filter.
Get the number of messages that were denied passage through a storage filter.
-Number of messages denied by a storage filter.
Get the number of messages currently stored in the message queue.
-Number of messages currently stored in the message queue.
Get the number of messages that are able to pass through a retrieval filter.
-Number of messages allowed by a retrieval filter.
Get the number of messages that were discarded due to the message count limit.
-Number of messages discarded.
Get and set the message count limit with
Get the maximum number of messages that can be added to the message queue.
-Maximum number of messages that can be added to the queue. -1 means no limit.
When the number of messages in the message queue has reached the maximum limit, new messages coming in will push old messages out.
-Add storage filters to the top of the storage-filter stack.
-Array of storage filters (see
This method returns one of the following Direct3D 11 Return Codes.
Get the storage filter at the top of the storage-filter stack.
-Storage filter at the top of the storage-filter stack.
Size of the storage filter in bytes. If pFilter is
This method returns one of the following Direct3D 11 Return Codes.
Remove a storage filter from the top of the storage-filter stack.
-Push an empty storage filter onto the storage-filter stack.
-This method returns one of the following Direct3D 11 Return Codes.
An empty storage filter allows all messages to pass through.
-Push a copy of the storage filter currently on the top of the storage-filter stack onto the storage-filter stack.
-This method returns one of the following Direct3D 11 Return Codes.
Push a storage filter onto the storage-filter stack.
-Pointer to a storage filter (see
This method returns one of the following Direct3D 11 Return Codes.
Pop a storage filter from the top of the storage-filter stack.
-Get the size of the storage-filter stack in bytes.
-Size of the storage-filter stack in bytes.
Add storage filters to the top of the retrieval-filter stack.
-Array of retrieval filters (see
This method returns one of the following Direct3D 11 Return Codes.
The following code example shows how to use
D3D11_MESSAGE_CATEGORY cats[] = { ..., ..., ... };
D3D11_MESSAGE_SEVERITY sevs[] = { ..., ..., ... };
UINT ids[] = { ..., ..., ... };

D3D11_INFO_QUEUE_FILTER filter;
memset( &filter, 0, sizeof(filter) );

// To set the type of messages to allow,
// set filter.AllowList as follows:
filter.AllowList.NumCategories = sizeof(cats) / sizeof(D3D11_MESSAGE_CATEGORY);
filter.AllowList.pCategoryList = cats;
filter.AllowList.NumSeverities = sizeof(sevs) / sizeof(D3D11_MESSAGE_SEVERITY);
filter.AllowList.pSeverityList = sevs;
filter.AllowList.NumIDs = sizeof(ids) / sizeof(UINT);
filter.AllowList.pIDList = ids;

// To set the type of messages to deny, set filter.DenyList
// similarly to the preceding filter.AllowList.

// The following single call sets all of the preceding information.
hr = infoQueue->AddRetrievalFilterEntries( &filter );
Get the retrieval filter at the top of the retrieval-filter stack.
-Retrieval filter at the top of the retrieval-filter stack.
Size of the retrieval filter in bytes. If pFilter is
This method returns one of the following Direct3D 11 Return Codes.
Remove a retrieval filter from the top of the retrieval-filter stack.
-Push an empty retrieval filter onto the retrieval-filter stack.
-This method returns one of the following Direct3D 11 Return Codes.
An empty retrieval filter allows all messages to pass through.
-Push a copy of the retrieval filter currently on the top of the retrieval-filter stack onto the retrieval-filter stack.
-This method returns one of the following Direct3D 11 Return Codes.
Push a retrieval filter onto the retrieval-filter stack.
-Pointer to a retrieval filter (see
This method returns one of the following Direct3D 11 Return Codes.
Pop a retrieval filter from the top of the retrieval-filter stack.
-Get the size of the retrieval-filter stack in bytes.
-Size of the retrieval-filter stack in bytes.
Add a debug message to the message queue and send that message to debug output.
-Category of a message (see
Severity of a message (see
Unique identifier of a message (see
User-defined message.
This method returns one of the following Direct3D 11 Return Codes.
This method is used by the runtime's internal mechanisms to add debug messages to the message queue and send them to debug output. For applications to add their own custom messages to the message queue and send them to debug output, call
Add a user-defined message to the message queue and send that message to debug output.
-Severity of a message (see
Message string.
This method returns one of the following Direct3D 11 Return Codes.
Set a message category to break on when a message with that category passes through the storage filter.
-Message category to break on (see
Turns this breaking condition on or off (true for on, false for off).
This method returns one of the following Direct3D 11 Return Codes.
Set a message severity level to break on when a message with that severity level passes through the storage filter.
-A
Turns this breaking condition on or off (true for on, false for off).
This method returns one of the following Direct3D 11 Return Codes.
Set a message identifier to break on when a message with that identifier passes through the storage filter.
-Message identifier to break on (see
Turns this breaking condition on or off (true for on, false for off).
This method returns one of the following Direct3D 11 Return Codes.
Get a message category to break on when a message with that category passes through the storage filter.
-Message category to break on (see
Whether this breaking condition is turned on or off (true for on, false for off).
Get a message severity level to break on when a message with that severity level passes through the storage filter.
-Message severity level to break on (see
Whether this breaking condition is turned on or off (true for on, false for off).
Get a message identifier to break on when a message with that identifier passes through the storage filter.
-Message identifier to break on (see
Whether this breaking condition is turned on or off (true for on, false for off).
Set a boolean that turns the debug output on or off.
-Disable/Enable the debug output (TRUE to disable or mute the output,
This will stop messages that pass the storage filter from being printed out in the debug output; however, those messages will still be added to the message queue.
-Get a boolean that turns the debug output on or off.
-Whether the debug output is on or off (true for on, false for off).
Get a message from the message queue.
-Index into message queue after an optional retrieval filter has been applied. This can be between 0 and the number of messages in the message queue that pass through the retrieval filter (which can be obtained with
Get the storage filter at the top of the storage-filter stack.
-Get the retrieval filter at the top of the retrieval-filter stack.
-An input-layout interface holds a definition of how to feed vertex data that is laid out in memory into the input-assembler stage of the graphics pipeline.
-To create an input-layout object, call
Provides threading protection for critical sections of a multi-threaded application.
-This interface is obtained by querying it from an immediate device context created with the
Unlike D3D10, there is no multithreaded layer in D3D11. By default, multithread protection is turned off. Use SetMultithreadProtected to turn it on, then Enter and Leave to encapsulate graphics commands that must be executed in a specific order.
By default in D3D11, applications can only use one thread with the immediate context at a time. However, applications can use this interface to change that restriction. The interface can turn on threading protection for the immediate context, which will increase the overhead of each immediate context call in order to share one context with multiple threads.
-Find out if multithread protection is turned on or not.
-Enter a device's critical section.
-If SetMultithreadProtected is set to true, then entering a device's critical section prevents other threads from simultaneously calling that device's methods, calling DXGI methods, and calling the methods of all resource, view, shader, state, and asynchronous interfaces.
This function should be used in multithreaded applications when there is a series of graphics commands that must happen in order. This function is typically called at the beginning of the series of graphics commands, and Leave is typically called after those graphics commands.
-Leave a device's critical section.
-This function is typically used in multithreaded applications when there is a series of graphics commands that must happen in order. Enter is typically called at the beginning of a series of graphics commands, and this function is typically called after those graphics commands.
-Turns multithread protection on or off.
-Set to true to turn multithread protection on, false to turn it off.
True if multithread protection was already turned on prior to calling this method, false otherwise.
Find out if multithread protection is turned on or not.
-Returns true if multithread protection is turned on, false otherwise.
A pixel-shader interface manages an executable program (a pixel shader) that controls the pixel-shader stage.
-The pixel-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a pixel shader interface, call
This interface is defined in D3D11.h.
-The
Bind an array of shader resources to the pixel shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then this API will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets a pixel shader to the device.
- Pointer to a pixel shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Set ppClassInstances to
Windows Phone 8: This API is supported.
-Sets a pixel shader to the device.
- Pointer to a pixel shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Set ppClassInstances to
Windows Phone 8: This API is supported.
-Sets a pixel shader to the device.
- Pointer to a pixel shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Set ppClassInstances to
Windows Phone 8: This API is supported.
-Set an array of sampler states to the pixel shader pipeline stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any sampler may be set to
State | Default Value |
---|---|
Filter | |
AddressU | |
AddressV | |
AddressW | |
MipLODBias | 0 |
MaxAnisotropy | 1 |
ComparisonFunc | |
BorderColor[0] | 1.0f |
BorderColor[1] | 1.0f |
BorderColor[2] | 1.0f |
BorderColor[3] | 1.0f |
MinLOD | -FLT_MAX |
MaxLOD | FLT_MAX |
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets the constant buffers used by the pixel shader pipeline stage.
- Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available on Windows 8 and later operating systems, can bind a larger number of
To enable the shader to access other parts of the buffer, call PSSetConstantBuffers1 instead of PSSetConstantBuffers. PSSetConstantBuffers1 has additional parameters pFirstConstant and pNumConstants.
-Bind an array of shader resources to the pixel shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then this API will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Get the pixel shader currently set on the device.
- Address of a reference to a pixel shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
Windows Phone 8: This API is supported.
-Get an array of sampler states from the pixel shader pipeline stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Array of sampler-state interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the constant buffers used by the pixel shader pipeline stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-A predicate interface determines whether geometry should be processed depending on the results of a previous draw call.
-To create a predicate object, call
There are two types of predicates: stream-output-overflow predicates and occlusion predicates. Stream-output-overflow predicates cause any geometry residing in stream-output buffers that were overflowed to not be processed. Occlusion predicates cause any geometry that did not have a single sample pass the depth/stencil tests to not be processed.
-A query interface queries information from the GPU.
-A query can be created with
Query data is typically gathered by issuing an
There are, however, some queries that do not require calls to Begin. For a list of possible queries see
A query is typically executed as shown in the following code:
D3D11_QUERY_DESC queryDesc;
... // Fill out queryDesc structure
ID3D11Query * pQuery;
pDevice->CreateQuery(&queryDesc, &pQuery);

pDeviceContext->Begin(pQuery);
... // Issue graphics commands
pDeviceContext->End(pQuery);

UINT64 queryData; // This data type is different depending on the query type

while( S_OK != pDeviceContext->GetData(pQuery, &queryData, sizeof(UINT64), 0) )
{
}
When using a query that does not require a call to Begin, it still requires a call to End. The call to End causes the data returned by GetData to be accurate up until the last call to End.
-Get a query description.
-Get a query description.
-Pointer to a query description (see
Represents a query object for querying information from the graphics processing unit (GPU).
-A query can be created with
Query data is typically gathered by issuing an
There are, however, some queries that do not require calls to Begin. For a list of possible queries see
When using a query that does not require a call to Begin, it still requires a call to End. The call to End causes the data returned by GetData to be accurate up until the last call to End.
-Gets a query description.
-Gets a query description.
-A reference to a
The rasterizer-state interface holds a description for rasterizer state that you can bind to the rasterizer stage.
-To create a rasterizer-state object, call
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-You use the description for rasterizer state in a call to the
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-A reference to a
You use the description for rasterizer state in a call to the
Create a rasterizer state object that tells the rasterizer stage how to behave.
-A maximum of 4096 unique rasterizer state objects can be created on a device at a time.
If an application attempts to create a rasterizer-state interface with the same state as an existing interface, the same interface will be returned and the total number of unique rasterizer state objects will stay the same.
-The rasterizer-state interface holds a description for rasterizer state that you can bind to the rasterizer stage. This rasterizer-state interface supports forced sample count.
-To create a rasterizer-state object, call
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-You use the description for rasterizer state in a call to the
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-A reference to a
You use the description for rasterizer state in a call to the
The rasterizer-state interface holds a description for rasterizer state that you can bind to the rasterizer stage. This rasterizer-state interface supports forced sample count and conservative rasterization mode.
-To create a rasterizer-state object, call
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-You use the description for rasterizer state in a call to the
Gets the description for rasterizer state that you used to create the rasterizer-state object.
- A reference to a
You use the description for rasterizer state in a call to the
Sets graphics processing unit (GPU) debug reference default tracking options for specific resource types.
-This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets graphics processing unit (GPU) debug reference default tracking options for specific resource types.
- A
A combination of D3D11_SHADER_TRACKING_OPTIONS-typed flags that are combined by using a bitwise OR operation. The resulting value identifies tracking options. If a flag is present, the tracking option that the flag represents is set to "on"; otherwise the tracking option is set to "off."
This method returns one of the Direct3D 11 return codes.
This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets graphics processing unit (GPU) debug reference tracking options.
-This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets graphics processing unit (GPU) debug reference tracking options.
-This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets graphics processing unit (GPU) debug reference tracking options.
-A combination of D3D11_SHADER_TRACKING_OPTIONS-typed flags that are combined by using a bitwise OR operation. The resulting value identifies tracking options. If a flag is present, the tracking option that the flag represents is set to "on"; otherwise the tracking option is set to "off."
This method returns one of the Direct3D 11 return codes.
This API requires the Windows Software Development Kit (SDK) for Windows 8.
-A render-target-view interface identifies the render-target subresources that can be accessed during rendering.
-To create a render-target view, call
A render target is a resource that can be written by the output-merger stage at the end of a render pass. Each render target should also have a corresponding depth-stencil view.
-Get the properties of a render target view.
-Get the properties of a render target view.
-Pointer to the description of a render target view (see
A render-target-view interface represents the render-target subresources that can be accessed during rendering.
-To create a render-target view, call
A render target is a resource that can be written by the output-merger stage at the end of a render pass. Each render target can also have a corresponding depth-stencil view.
-Gets the properties of a render-target view.
-Gets the properties of a render-target view.
-A reference to a
A resource interface provides common actions on all resources.
-You don't directly create a resource interface; instead, you create buffers and textures that inherit from a resource interface. For more info, see How to: Create a Vertex Buffer, How to: Create an Index Buffer, How to: Create a Constant Buffer, and How to: Create a Texture.
-Get the type of the resource.
-Windows Phone 8: This API is supported.
-Gets or sets the eviction priority of a resource.
-Get the type of the resource.
- Pointer to the resource type (see
Windows Phone 8: This API is supported.
-Set the eviction priority of a resource.
-Eviction priority for the resource, which is one of the following values:
Resource priorities determine which resource to evict from video memory when the system has run out of video memory. The resource will not be lost; it will be removed from video memory and placed into system memory, or possibly placed onto the hard drive. The resource will be loaded back into video memory when it is required.
A resource that is set to the maximum priority,
Changing the priorities of resources should be done carefully. The wrong eviction priorities could be a detriment to performance rather than an improvement.
-Get the eviction priority of a resource.
-One of the following values, which specifies the eviction priority for the resource:
A view interface specifies the parts of a resource the pipeline can access during rendering.
-A view interface is the base interface for all views. There are four types of views; a depth-stencil view, a render-target view, a shader-resource view, and an unordered-access view.
All resources must be bound to the pipeline before they can be accessed.
Get the resource that is accessed through this view.
-Address of a reference to the resource that is accessed through this view. (See
This function increments the reference count of the resource by one, so it is necessary to call Release on the returned reference when the application is done with it. Destroying (or losing) the returned reference before Release is called will result in a memory leak.
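The reference-counting rule above can be illustrated with a small self-contained mock (not the real D3D11 interfaces; `MockResource`, `MockView`, and their members are hypothetical names used only for this sketch): the accessor increments the count before handing the pointer out, so the caller must balance it with a release or the count never returns to zero.

```cpp
#include <cassert>

// Minimal mock of the COM reference-count discipline described above.
// A GetResource-style accessor AddRefs the object before returning it;
// the caller owns that extra reference and must Release it.
struct MockResource
{
    int refCount = 1;          // owner's initial reference
    void AddRef()  { ++refCount; }
    int  Release() { return --refCount; }
};

// Hypothetical view that hands out its resource with an incremented count.
struct MockView
{
    MockResource* resource;
    void GetResource(MockResource** out)
    {
        resource->AddRef();    // returned interface gains one reference
        *out = resource;       // caller must Release this when done
    }
};
```

Losing the returned pointer without the matching release leaves the count permanently elevated, which is exactly the memory leak the remark warns about.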
-Get the resource that is accessed through this view.
-This function increments the reference count of the resource by one, so it is necessary to call Release on the returned reference when the application is done with it. Destroying (or losing) the returned reference before Release is called will result in a memory leak.
-Get the resource that is accessed through this view.
-This function increments the reference count of the resource by one, so it is necessary to call Dispose on the returned reference when the application is done with it. Destroying (or losing) the returned reference before Dispose is called will result in a memory leak.
-The sampler-state interface holds a description for sampler state that you can bind to any shader stage of the pipeline for reference by texture sample operations.
-To create a sampler-state object, call
To bind a sampler-state object to any pipeline shader stage, call the following methods:
You can bind the same sampler-state object to multiple shader stages simultaneously.
-Gets the description for sampler state that you used to create the sampler-state object.
-You use the description for sampler state in a call to the
Gets the description for sampler state that you used to create the sampler-state object.
-A reference to a
You use the description for sampler state in a call to the
A shader-resource-view interface specifies the subresources a shader can access during rendering. Examples of shader resources include a constant buffer, a texture buffer, and a texture.
-To create a shader-resource view, call
A shader-resource view is required when binding a resource to a shader stage; the binding occurs by calling
Get the shader resource view's description.
-Get the shader resource view's description.
-A reference to a
A shader-resource-view interface represents the subresources a shader can access during rendering. Examples of shader resources include a constant buffer, a texture buffer, and a texture.
-To create a shader-resource view, call
A shader-resource view is required when binding a resource to a shader stage; the binding occurs by calling
Gets the shader-resource view's description.
-Gets the shader-resource view's description.
-A reference to a
Reserved.
Reserved.
A 1D texture interface accesses texel data, which is structured memory.
-To create an empty 1D texture, call
Textures cannot be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render target or depth-stencil resource, call
Get the properties of the texture resource.
-Get the properties of the texture resource.
-Pointer to a resource description (see
A 2D texture interface manages texel data, which is structured memory.
-To create an empty Texture2D resource, call
Textures cannot be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render target or depth-stencil resource, call
Get the properties of the texture resource.
-Get the properties of the texture resource.
-Pointer to a resource description (see
A 2D texture interface represents texel data, which is structured memory.
-To create an empty Texture2D resource, call
Textures can't be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render-target or depth-stencil resource, call
Gets the properties of the texture resource.
-Gets the properties of the texture resource.
-A reference to a
A 3D texture interface accesses texel data, which is structured memory.
-To create an empty Texture3D resource, call
Textures cannot be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render target or depth-stencil resource, call
Get the properties of the texture resource.
-Get the properties of the texture resource.
-Pointer to a resource description (see
A 3D texture interface represents texel data, which is structured memory.
-To create an empty Texture3D resource, call
Textures can't be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render-target or depth-stencil resource, call
Gets the properties of the texture resource.
-Gets the properties of the texture resource.
-A reference to a
The tracing device interface sets shader tracking information, which enables accurate logging and playback of shader execution.
-To get this interface, turn on the debug layer and use IUnknown::QueryInterface from the
Sets the reference rasterizer's default race-condition tracking options for the specified resource types.
-A
A combination of D3D11_SHADER_TRACKING_OPTIONS-typed flags that are combined by using a bitwise OR operation. The resulting value identifies tracking options. If a flag is present, the tracking option that the flag represents is set to "on," otherwise the tracking option is set to "off."
This method returns one of the Direct3D 11 return codes.
This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets the reference rasterizer's race-condition tracking options for a specific shader.
-A reference to the
A combination of D3D11_SHADER_TRACKING_OPTIONS-typed flags that are combined by using a bitwise OR operation. The resulting value identifies tracking options. If a flag is present, the tracking option that the flag represents is set to "on"; otherwise the tracking option is set to "off."
This method returns one of the Direct3D 11 return codes.
A view interface specifies the parts of a resource the pipeline can access during rendering.
-To create a view for an unordered access resource, call
All resources must be bound to the pipeline before they can be accessed. Call
Get a description of the resource.
-Get a description of the resource.
-Pointer to a resource description (see
An unordered-access-view interface represents the parts of a resource the pipeline can access during rendering.
-To create a view for an unordered access resource, call
All resources must be bound to the pipeline before they can be accessed. Call
Gets a description of the resource.
-Gets a description of the resource.
-A reference to a
The
The methods of
The
The
You must call the BeginEvent and EndEvent methods in pairs; pairs of calls to these methods can nest within pairs of calls to these methods at a higher level in the application's call stack. In other words, a "Draw World" section can entirely contain another section named "Draw Trees," which can in turn entirely contain a section called "Draw Oaks." You can only associate an EndEvent method with the most recent BeginEvent method, that is, pairs cannot overlap. You cannot call an EndEvent for any BeginEvent that preceded the most recent BeginEvent. In fact, the runtime interprets the first EndEvent as ending the second BeginEvent.
-Determines whether the calling application is running under a Microsoft Direct3D profiling tool.
-You can call GetStatus to determine whether your application is running under a Direct3D profiling tool before you make further calls to other methods of the
Marks the beginning of a section of event code.
-A
Returns the number of previous calls to BeginEvent that have not yet been finalized by calls to the
The return value is -1 if the calling application is not running under a Direct3D profiling tool.
You call the EndEvent method to mark the end of the section of event code.
A user can visualize the event when the calling application is running under an enabled Direct3D profiling tool such as Microsoft Visual Studio Ultimate 2012.
BeginEvent has no effect if the calling application is not running under an enabled Direct3D profiling tool.
-Marks the end of a section of event code.
-Returns the number of previous calls to the
The return value is -1 if the calling application is not running under a Direct3D profiling tool.
You call the BeginEvent method to mark the beginning of the section of event code.
A user can visualize the event when the calling application is running under an enabled Direct3D profiling tool such as Microsoft Visual Studio Ultimate 2012.
EndEvent has no effect if the calling application is not running under an enabled Direct3D profiling tool.
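The pairing and nesting rules for BeginEvent/EndEvent, and their documented return values (the count of unmatched BeginEvent calls), can be sketched with a tiny mock counter. This is an illustration only, not the real annotation interface; `MockAnnotation` and `depth` are invented for this sketch.

```cpp
#include <cassert>

// Mock of the BeginEvent/EndEvent nesting contract described above.
// BeginEvent returns the number of previous BeginEvent calls that have
// not yet been matched by EndEvent; EndEvent returns the number that
// remain unmatched after it closes the most recent one.
struct MockAnnotation
{
    int depth = 0;                       // currently open events
    int BeginEvent() { return depth++; } // report prior unmatched count
    int EndEvent()   { return --depth; } // close the most recent event
};
```

Because EndEvent always closes the most recent BeginEvent, sections nest like "Draw World" containing "Draw Trees" containing "Draw Oaks"; pairs can never overlap.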
-Marks a single point of execution in code.
-A
A user can visualize the marker when the calling application is running under an enabled Direct3D profiling tool such as Microsoft Visual Studio Ultimate 2012.
SetMarker has no effect if the calling application is not running under an enabled Direct3D profiling tool.
-Determines whether the calling application is running under a Microsoft Direct3D profiling tool.
-The return value is nonzero if the calling application is running under a Direct3D profiling tool such as Visual Studio Ultimate 2012, and zero otherwise.
You can call GetStatus to determine whether your application is running under a Direct3D profiling tool before you make further calls to other methods of the
A vertex-shader interface manages an executable program (a vertex shader) that controls the vertex-shader stage.
-The vertex-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a vertex shader interface, call
This interface is defined in D3D11.h.
-The
Sets the constant buffers used by the vertex shader pipeline stage.
- Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available starting with Windows?8, can bind a larger number of
If the application wants the shader to access other parts of the buffer, it must call the VSSetConstantBuffers1 method instead.
Windows Phone 8: This API is supported.
-Set a vertex shader to the device.
-Pointer to a vertex shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a vertex shader to the device.
-Pointer to a vertex shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a vertex shader to the device.
-Pointer to a vertex shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Bind an array of shader resources to the vertex-shader stage.
-Index into the device's zero-based array to begin setting shader resources to (range is from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (range is from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then this API will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set an array of sampler states to the vertex shader pipeline stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any sampler may be set to
// Default sampler state:
D3D11_SAMPLER_DESC SamplerDesc;
SamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
SamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Get the constant buffers used by the vertex shader pipeline stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the vertex shader currently set on the device.
-Address of a reference to a vertex shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the vertex shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler states from the vertex shader pipeline stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Array of sampler-state interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Provides the video functionality of a Microsoft Direct3D 11 device.
-To get a reference to this interface, call QueryInterface with an
This interface provides access to several areas of Microsoft Direct3D video functionality:
In Microsoft Direct3D 9, the equivalent functions were distributed across several interfaces:
Gets a reference to a DirectX Video Acceleration (DXVA) decoder buffer.
-The graphics driver allocates the buffers that are used for DXVA decoding. This method locks the Microsoft Direct3D surface that contains the buffer. When you are done using the buffer, call
Gets a reference to a decoder buffer.
-A reference to the
The type of buffer to retrieve, specified as a member of the
Receives the size of the buffer, in bytes.
Receives a reference to the start of the memory buffer.
If this method succeeds, it returns
The graphics driver allocates the buffers that are used for decoding. This method locks the Microsoft Direct3D surface that contains the buffer. When you are done using the buffer, call
Releases a buffer that was obtained by calling the
If this method succeeds, it returns
Starts a decoding operation to decode a video frame.
-A reference to the
A reference to the
The size of the content key that is specified in pContentKey. If pContentKey is
An optional reference to a content key that was used to encrypt the frame data. If no content key was used, set this parameter to
If this method succeeds, it returns
After this method is called, call
Each call to DecoderBeginFrame must have a matching call to DecoderEndFrame. In most cases you cannot nest DecoderBeginFrame calls, but some codecs, such as VC-1, can have nested DecoderBeginFrame calls for special operations like post processing.
The following encryption scenarios are supported through the content key:
Signals the end of a decoding operation.
-A reference to the
If this method succeeds, it returns
Submits one or more buffers for decoding.
-A reference to the
The number of buffers submitted for decoding.
A reference to an array of
If this method succeeds, it returns
This function does not honor a D3D11 predicate that may have been set.
If the application uses D3D11 queries, this function may not be accounted for with
When using feature levels 9_x, all partially encrypted buffers must use the same EncryptedBlockInfo, and partial encryption cannot be turned off on a per frame basis.
-Performs an extended function for decoding. This method enables extensions to the basic decoder functionality.
-A reference to the
A reference to a
If this method succeeds, it returns
Sets the target rectangle for the video processor.
-A reference to the
Specifies whether to apply the target rectangle.
A reference to a
The target rectangle is the area within the destination surface where the output will be drawn. The target rectangle is given in pixel coordinates, relative to the destination surface.
If this method is never called, or if the Enable parameter is
Sets the background color for the video processor.
-A reference to the
If TRUE, the color is specified as a YCbCr value. Otherwise, the color is specified as an RGB value.
A reference to a
The video processor uses the background color to fill areas of the target rectangle that do not contain a video image. Areas outside the target rectangle are not affected.
-Sets the output color space for the video processor.
-A reference to the
A reference to a
Sets the alpha fill mode for data that the video processor writes to the render target.
-A reference to the
The alpha fill mode, specified as a
The zero-based index of an input stream. This parameter is used if AlphaFillMode is
To find out which fill modes the device supports, call the
The default fill mode is
Sets the amount of downsampling to perform on the output.
-A reference to the
If TRUE, downsampling is enabled. Otherwise, downsampling is disabled and the Size member is ignored.
The sampling size.
Downsampling is sometimes used to reduce the quality of premium content when other forms of content protection are not available. By default, downsampling is disabled.
If the Enable parameter is TRUE, the driver downsamples the composed image to the specified size, and then scales it back to the size of the target rectangle.
The width and height of Size must be greater than zero. If the size is larger than the target rectangle, downsampling does not occur.
To use this feature, the driver must support downsampling, indicated by the
Specifies whether the video processor produces stereo video frames.
-A reference to the
If TRUE, stereo output is enabled. Otherwise, the video processor produces mono video frames.
By default, the video processor produces mono video frames.
To use this feature, the driver must support stereo video, indicated by the
Sets a driver-specific video processing state.
-A reference to the
A reference to a
The size of the pData buffer, in bytes.
A reference to a buffer that contains private state data. The method passes this buffer directly to the driver without validation. It is the responsibility of the driver to validate the data.
If this method succeeds, it returns
Gets the current target rectangle for the video processor.
-A reference to the
Receives the value TRUE if the target rectangle was explicitly set using the
If Enabled receives the value TRUE, this parameter receives the target rectangle. Otherwise, this parameter is ignored.
Gets the current background color for the video processor.
-A reference to the
Receives the value TRUE if the background color is a YCbCr color, or
A reference to a
Gets the current output color space for the video processor.
-A reference to the
A reference to a
Gets the current alpha fill mode for the video processor.
-A reference to the
Receives the alpha fill mode, as a
If the alpha fill mode is
Gets the current level of downsampling that is performed by the video processor.
-A reference to the
Receives the value TRUE if downsampling was explicitly enabled using the
If Enabled receives the value TRUE, this parameter receives the downsampling size. Otherwise, this parameter is ignored.
Queries whether the video processor produces stereo video frames.
-A reference to the
Receives the value TRUE if stereo output is enabled, or
Gets private state data from the video processor.
-A reference to the
A reference to a
The size of the pData buffer, in bytes.
A reference to a buffer that receives the private state data.
If this method succeeds, it returns
Specifies whether an input stream on the video processor contains interlaced or progressive frames.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
A
Sets the color space for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
A reference to a
Sets the rate at which the video processor produces output frames for an input stream.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The output rate, specified as a
Specifies how the driver performs frame-rate conversion, if required.
Value | Meaning
---|---
TRUE | Repeat frames.
FALSE | Interpolate frames.
A reference to a
The standard output rates are normal frame-rate (
Depending on the output rate, the driver might need to convert the frame rate. If so, the value of RepeatFrame controls whether the driver creates interpolated frames or simply repeats input frames.
-Sets the source rectangle for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether to apply the source rectangle.
A reference to a
The source rectangle is the portion of the input surface that is blitted to the destination surface. The source rectangle is given in pixel coordinates, relative to the input surface.
If this method is never called, or if the Enable parameter is
Sets the destination rectangle for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether to apply the destination rectangle.
A reference to a
The destination rectangle is the portion of the output surface that receives the blit for this stream. The destination rectangle is given in pixel coordinates, relative to the output surface.
The default destination rectangle is an empty rectangle (0, 0, 0, 0). If this method is never called, or if the Enable parameter is
Sets the planar alpha for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether alpha blending is enabled.
The planar alpha value. The value can range from 0.0 (transparent) to 1.0 (opaque). If Enable is
To use this feature, the driver must support per-stream planar alpha, indicated by the D3D11_VIDEO_PROCESSOR_FEATURE_CAPS_ALPHA_STREAM capability flag. To query for this capability, call
Alpha blending is disabled by default.
For each pixel, the destination color value is computed as follows:
Cd = Cs * (As * Ap * Ae) + Cd * (1.0 - As * Ap * Ae)
where:
Cd
= The color value of the destination pixelCs
= The color value of the source pixelAs
= The per-pixel source alphaAp
= The planar alpha valueAe
= The palette-entry alpha value, or 1.0 (see Note)The destination alpha value is computed according to the alpha fill mode. For more information, see
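The blend formula above can be written directly as a small helper. This is an illustrative sketch of the documented equation, not driver code; `BlendChannel` is a hypothetical name, and color and alpha values are assumed to be normalized to [0, 1].

```cpp
#include <cassert>

// Per-channel planar alpha blend, as given by the formula above:
//   Cd = Cs * (As * Ap * Ae) + Cd * (1 - As * Ap * Ae)
// Cs/Cd are the source/destination channel values, As the per-pixel
// source alpha, Ap the planar alpha, Ae the palette-entry alpha (or 1.0).
double BlendChannel(double Cs, double Cd, double As, double Ap, double Ae)
{
    double a = As * Ap * Ae;          // effective alpha for this pixel
    return Cs * a + Cd * (1.0 - a);   // linear interpolation toward source
}
```

A planar alpha of 0.0 leaves the destination untouched, and 1.0 (with opaque source and palette alpha) replaces it entirely, matching the transparent-to-opaque range described above.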
Sets the color-palette entries for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The number of elements in the pEntries array.
A reference to an array of palette entries. For RGB streams, the palette entries use the DXGI_FORMAT_B8G8R8A8 representation. For YCbCr streams, the palette entries use the
This method applies only to input streams that have a palettized color format. Palettized formats with 4 bits per pixel (bpp) use the first 16 entries in the list. Formats with 8 bpp use the first 256 entries.
If a pixel has a palette index greater than the number of entries, the device treats the pixel as white with opaque alpha. For full-range RGB, this value is (255, 255, 255, 255); for YCbCr the value is (255, 235, 128, 128).
If the driver does not report the
Sets the pixel aspect ratio for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether the pSourceAspectRatio and pDestinationAspectRatio parameters contain valid values. Otherwise, the pixel aspect ratios are unspecified.
A reference to a
A reference to a
This function can only be called if the driver reports the
Pixel aspect ratios of the form 0/n and n/0 are not valid.
The default pixel aspect ratio is 1:1 (square pixels).
-Sets the luma key for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether luma keying is enabled.
The lower bound for the luma key. The valid range is [0…1]. If Enable is
The upper bound for the luma key. The valid range is [0…1]. If Enable is
To use this feature, the driver must support luma keying, indicated by the
The values of Lower and Upper give the lower and upper bounds of the luma key, using a nominal range of [0...1]. Given a format with n bits per channel, these values are converted to luma values as follows:
val = f * ((1 << n) - 1)
where f is the value of Lower or Upper.
Any pixel whose luma value falls within the upper and lower bounds (inclusive) is treated as transparent.
For example, if the pixel format uses 8-bit luma, the upper bound is calculated as follows:
BYTE Y = BYTE(max(min(1.0, Upper), 0.0) * 255.0)
Note that the value is clamped to the range [0...1] before multiplying by 255.
-Enables or disables stereo 3D video for an input stream on the video processor. In addition, this method specifies the layout of the video frames in memory.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether stereo 3D is enabled for this stream. If the value is
Specifies the layout of the two stereo views in memory, as a
If TRUE, frame 0 contains the left view. Otherwise, frame 0 contains the right view.
This parameter is ignored for the following stereo formats:
If TRUE, frame 0 contains the base view. Otherwise, frame 1 contains the base view.
This parameter is ignored for the following stereo formats:
A flag from the
For
If Format is not
Enables or disables automatic processing features on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
If TRUE, automatic processing features are enabled. If
By default, the driver might perform certain processing tasks automatically during the video processor blit. This method enables the application to disable these extra video processing features. For example, if you provide your own pixel shader for the video processor, you might want to disable the driver's automatic processing.
-Enables or disables an image filter for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The filter, specified as a
To query which filters the driver supports, call
Specifies whether to enable the filter.
The filter level. If Enable is
To find the valid range of levels for a specified filter, call
Sets a driver-specific state on a video processing stream.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
A reference to a
The size of the pData buffer, in bytes.
A reference to a buffer that contains private state data. The method passes this buffer directly to the driver without validation. It is the responsibility of the driver to validate the data.
If this method succeeds, it returns
Gets the format of an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives a
Gets the color space for an input stream of the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives a
Gets the rate at which the video processor produces output frames for an input stream.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives a
Receives a Boolean value that specifies how the driver performs frame-rate conversion, if required.
Value | Meaning
---|---
TRUE | Repeat frames.
FALSE | Interpolate frames.
A reference to a
Gets the source rectangle for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if the source rectangle is enabled, or
A reference to a
Gets the destination rectangle for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if the destination rectangle is enabled, or
A reference to a
Gets the planar alpha for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if planar alpha is enabled, or
Receives the planar alpha value. The value can range from 0.0 (transparent) to 1.0 (opaque).
Gets the color-palette entries for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The number of entries in the pEntries array.
A reference to a UINT array allocated by the caller. The method fills the array with the palette entries. For RGB streams, the palette entries use the DXGI_FORMAT_B8G8R8A8 representation. For YCbCr streams, the palette entries use the
This method applies only to input streams that have a palettized color format. Palettized formats with 4 bits per pixel (bpp) use 16 palette entries. Formats with 8 bpp use 256 entries.
-Gets the pixel aspect ratio for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if the pixel aspect ratio is specified. Otherwise, receives the value
A reference to a
A reference to a
When the method returns, if *pEnabled is TRUE, the pSourceAspectRatio and pDestinationAspectRatio parameters contain the pixel aspect ratios. Otherwise, the default pixel aspect ratio is 1:1 (square pixels).
-Gets the luma key for an input stream of the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if luma keying is enabled, or
Receives the lower bound for the luma key. The valid range is [0…1].
Receives the upper bound for the luma key. The valid range is [0…1].
Gets the stereo 3D format for an input stream on the video processor
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if stereo 3D is enabled for this stream, or
Receives a
Receives a Boolean value.
Value | Meaning
---|---
TRUE | Frame 0 contains the left view.
FALSE | Frame 0 contains the right view.
Receives a Boolean value.
Value | Meaning
---|---
TRUE | Frame 0 contains the base view.
FALSE | Frame 1 contains the base view.
Receives a
Receives the pixel offset used for
Queries whether automatic processing features of the video processor are enabled.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if automatic processing features are enabled, or
Automatic processing refers to additional image processing that drivers might have performed on the image data prior to the application receiving the data.
-Gets the image filter settings for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The filter to query, specified as a
Receives the value TRUE if the image filter is enabled, or
Receives the filter level.
Gets a driver-specific state for a video processing stream.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
A reference to a
The size of the pData buffer, in bytes.
A reference to a buffer that receives the private state data.
If this method succeeds, it returns
Performs a video processing operation on one or more input samples and writes the result to a Direct3D surface.
-A reference to the
A reference to the
The frame number of the output video frame, indexed from zero.
The number of input streams to process.
A reference to an array of
If this method succeeds, it returns
The maximum value of StreamCount is given in the MaxStreamStates member of the
If the output stereo mode is TRUE:
Otherwise:
This function does not honor a D3D11 predicate that may have been set.
If the application uses D3D11 queries, this function may not be accounted for with
Establishes the session key for a cryptographic session.
-A reference to the
The size of the pData byte array, in bytes.
A reference to a byte array that contains the encrypted session key.
If this method succeeds, it returns
The key exchange mechanism depends on the type of cryptographic session.
For RSA Encryption Scheme - Optimal Asymmetric Encryption Padding (RSAES-OAEP), the software decoder generates the secret key, encrypts the secret key by using the public key with RSAES-OAEP, and places the cipher text in the pData parameter. The actual size of the buffer for RSAES-OAEP is 256 bytes.
-Reads encrypted data from a protected surface.
-A reference to the
A reference to the
A reference to the
The size of the pIV buffer, in bytes.
A reference to a buffer that receives the initialization vector (IV). The caller allocates this buffer, but the driver generates the IV.
For 128-bit AES-CTR encryption, pIV points to a
Not all drivers support this method. To query the driver capabilities, call
Some drivers might require a separate key to decrypt the data that is read back. To check for this requirement, call GetContentProtectionCaps and check for the
This method has the following limitations:
This function does not honor a D3D11 predicate that may have been set.
If the application uses D3D11 queries, this function may not be accounted for with
Writes encrypted data to a protected surface.
-A reference to the
A reference to the surface that contains the source data.
A reference to the protected surface where the encrypted data is written.
A reference to a
If the driver supports partially encrypted buffers, pEncryptedBlockInfo indicates which portions of the buffer are encrypted. If the entire surface is encrypted, set this parameter to
To check whether the driver supports partially encrypted buffers, call
The size of the encrypted content key, in bytes.
A reference to a buffer that contains a content encryption key, or
If the driver supports content keys, use the content key to encrypt the surface. Encrypt the content key using the session key, and place the resulting cipher text in pContentKey. If the driver does not support content keys, use the session key to encrypt the surface and set pContentKey to
The size of the pIV buffer, in bytes.
A reference to a buffer that contains the initialization vector (IV).
For 128-bit AES-CTR encryption, pIV points to a
For other encryption types, a different structure might be used, or the encryption might not use an IV.
Not all hardware or drivers support this functionality for all cryptographic types. This function can only be called when the
This method does not support writing to sub-rectangles of the surface.
If the hardware and driver support a content key:
Otherwise, the data is encrypted by the caller using the session key and
If the driver and hardware support partially encrypted buffers, pEncryptedBlockInfo indicates which portions of the buffer are encrypted and which are not. If the entire buffer is encrypted, pEncryptedBlockInfo should be
The
This function does not honor a D3D11 predicate that may have been set.
If the application uses D3D11 queries, this function may not be accounted for with
Gets a random number that can be used to refresh the session key.
-A reference to the
The size of the pRandomNumber array, in bytes. The size should match the size of the session key.
A reference to a byte array that receives a random number.
To generate a new session key, perform a bitwise XOR between the previous session key and the random number. The new session key does not take effect until the application calls
To query whether the driver supports this method, call
Switches to a new session key.
-A reference to the
This function can only be called when the
Before calling this method, call
Gets the cryptographic key to decrypt the data returned by the
If this method succeeds, it returns
This method applies only when the driver requires a separate content key for the EncryptionBlt method. For more information, see the Remarks for EncryptionBlt.
Each time this method is called, the driver generates a new key.
The KeySize should match the size of the session key.
The read back key is encrypted by the driver/hardware using the session key.
-Establishes a session key for an authenticated channel.
-A reference to the
The size of the data in the pData array, in bytes.
A reference to a byte array that contains the encrypted session key. The buffer must contain 256 bytes of data, encrypted using RSA Encryption Scheme - Optimal Asymmetric Encryption Padding (RSAES-OAEP).
If this method succeeds, it returns
This method will fail if the channel type is
Sends a query to an authenticated channel.
-A reference to the
The size of the pInput array, in bytes.
A reference to a byte array that contains input data for the query. This array always starts with a
The size of the pOutput array, in bytes.
A reference to a byte array that receives the result of the query. This array always starts with a
If this method succeeds, it returns
Sends a configuration command to an authenticated channel.
-A reference to the
The size of the pInput array, in bytes.
A reference to a byte array that contains input data for the command. This buffer always starts with a
A reference to a
If this method succeeds, it returns
Sets the stream rotation for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies if the stream is to be rotated in a clockwise orientation.
Specifies the rotation of the stream.
This is an optional state and the application should only use it if
The stream source rectangle will be specified in the pre-rotation coordinates (typically landscape) and the stream destination rectangle will be specified in the post-rotation coordinates (typically portrait). The application must update the stream destination rectangle correctly when using a rotation value other than 0° and 180°.
-Gets the stream rotation for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies if the stream is rotated.
Specifies the rotation of the stream in a clockwise orientation.
[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Gets a reference to a DirectX Video Acceleration (DXVA) decoder buffer.
-A reference to the
The type of buffer to retrieve, specified as a member of the
The graphics driver allocates the buffers that are used for DXVA decoding. This method locks the Microsoft Direct3D surface that contains the buffer. When you are done using the buffer, call
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the video functionality of a Microsoft Direct3D 11 device.
-To get a reference to this interface, call QueryInterface with an
Submits one or more buffers for decoding.
-A reference to the
The number of buffers submitted for decoding.
A reference to an array of
If this method succeeds, it returns
This function does not honor any D3D11 predicate that may have been set.
Allows the driver to return IHV specific information used when initializing the new hardware key.
-A reference to the
The size of the memory referenced by the pPrivateInputData parameter.
The private input data. The contents of this parameter is defined by the implementation of the secure execution environment. It may contain data about the license or about the stream properties.
A reference to the private output data. The return data is defined by the implementation of the secure execution environment. It may contain graphics-specific data to be associated with the underlying hardware key.
This method returns one of the following error codes.
Return code | Description
---|---
S_OK | The operation completed successfully.
E_OUTOFMEMORY | There is insufficient memory to complete the operation.
Checks the status of a crypto session.
-Specifies a
A
This method returns one of the following error codes.
Return code | Description
---|---
S_OK | The operation completed successfully.
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern.
E_OUTOFMEMORY | There is insufficient memory to complete the operation.
Indicates that decoder downsampling will be used and that the driver should allocate the appropriate reference frames.
-A reference to the
The color space information of the reference frame data.
The resolution, format, and colorspace of the output/display frames. This is the destination resolution and format of the downsample operation.
The number of reference frames to be used in the operation.
This method returns one of the following error codes.
Return code | Description
---|---
S_OK | The operation completed successfully.
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern.
E_OUTOFMEMORY | There is insufficient memory to complete the operation.
This function can only be called once for a specific
Updates the decoder downsampling parameters.
-A reference to the
The resolution, format, and colorspace of the output/display frames. This is the destination resolution and format of the downsample operation.
This method returns one of the following error codes.
Return code | Description
---|---
S_OK | The operation completed successfully.
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern.
E_OUTOFMEMORY | There is insufficient memory to complete the operation.
This method can only be called after decode downsampling is enabled by calling DecoderEnableDownsampling. This method is only supported if the
Sets the color space information for the video processor output surface.
-A reference to the
A
Sets a value indicating whether the output surface from a call to
Gets the color space information for the video processor output surface.
-A reference to the
A reference to a
Gets a value indicating whether the output surface from a call to
Sets the color space information for the video processor input stream.
-A reference to the
An index identifying the input stream.
A
Specifies whether the video processor input stream should be flipped vertically or horizontally.
-A reference to the
An index identifying the input stream.
True if mirroring should be enabled; otherwise, false.
True if the stream should be flipped horizontally; otherwise, false.
True if the stream should be flipped vertically; otherwise, false.
When used in combination, transformations on the processor input stream should be applied in the following order:
Gets the color space information for the video processor input stream.
-A reference to the
An index identifying the input stream.
A reference to a
Gets values that indicate whether the video processor input stream is being flipped vertically or horizontally.
-A reference to the
An index identifying the input stream.
A reference to a boolean value indicating whether mirroring is enabled. True if mirroring is enabled; otherwise, false.
A reference to a boolean value indicating whether the stream is being flipped horizontally. True if the stream is being flipped horizontally; otherwise, false.
A reference to a boolean value indicating whether the stream is being flipped vertically. True if the stream is being flipped vertically; otherwise, false.
Returns driver hints that indicate which of the video processor operations are best performed using multi-plane overlay hardware rather than
This method returns one of the following error codes.
Return code | Description
---|---
S_OK | The operation completed successfully.
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern.
E_OUTOFMEMORY | There is insufficient memory to complete the operation.
This method computes the behavior hints using the current state of the video processor as set by the "SetOutput" and "SetStream" methods of
Provides the video functionality of a Microsoft Direct3D 11 device.
-To get a reference to this interface, call QueryInterface with an
This interface provides access to several areas of Microsoft Direct3D video functionality:
In Microsoft Direct3D 9, the equivalent functions were distributed across several interfaces:
Represents a hardware-accelerated video decoder for Microsoft Direct3D 11.
-To get a reference to this interface, call
Gets a handle to the driver.
-The driver handle can be used to configure content protection.
-Gets the parameters that were used to create the decoder.
-A reference to a
A reference to a
If this method succeeds, it returns
Gets a handle to the driver.
-Receives a handle to the driver.
If this method succeeds, it returns
The driver handle can be used to configure content protection.
-Identifies the output surfaces that can be accessed during video decoding.
-To get a reference to this interface, call
Gets the properties of the video decoder output view. -
-Gets the properties of the video decoder output view. -
-A reference to a
Provides the video decoding and video processing capabilities of a Microsoft Direct3D 11 device.
-The Direct3D 11 device supports this interface. To get a reference to this interface, call QueryInterface with an
If you query an
Gets the number of profiles that are supported by the driver.
-To enumerate the profiles, call
Creates a video decoder device for Microsoft Direct3D 11.
-A reference to a
A reference to a
Receives a reference to the
If this method succeeds, it returns
This method allocates the necessary decoder buffers.
The
Creates a video processor device for Microsoft Direct3D 11.
-A reference to the
Specifies the frame-rate conversion capabilities for the video processor. The value is a zero-based index that corresponds to the TypeIndex parameter of the
Receives a reference to the
If this method succeeds, it returns
The
Creates a channel to communicate with the Microsoft Direct3D device or the graphics driver. The channel can be used to send commands and queries for content protection.
-Specifies the type of channel, as a member of the
Receives a reference to the
If this method succeeds, it returns
If the ChannelType parameter is
If ChannelType is
Creates a cryptographic session to encrypt video content that is sent to the graphics driver.
-A reference to a
Value | Meaning
---|---
D3D11_CRYPTO_TYPE_AES128_CTR | 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher.
A reference to a
A reference to a
Value | Meaning
---|---
D3D11_KEY_EXCHANGE_RSAES_OAEP | The caller will create the session key, encrypt it with RSA Encryption Scheme - Optimal Asymmetric Encryption Padding (RSAES-OAEP) by using the driver's public key, and pass the session key to the driver.
Receives a reference to the
If this method succeeds, it returns
The
Creates a resource view for a video decoder, describing the output sample for the decoding operation.
-A reference to the
A reference to a
Receives a reference to the
If this method succeeds, it returns
Set the ppVDOVView parameter to
Creates a resource view for a video processor, describing the input sample for the video processing operation.
-A reference to the
A reference to the
A reference to a
Receives a reference to the
If this method succeeds, it returns
Set the ppVPIView parameter to
The surface format is given in the FourCC member of the
Resources used for video processor input views must use the following bind flag combinations:
Creates a resource view for a video processor, describing the output sample for the video processing operation.
-A reference to the
A reference to the
A reference to a
Receives a reference to the
If this method succeeds, it returns
Set the ppVPOView parameter to
Resources used for video processor output views must use the following
If stereo output is enabled, the output view must have 2 array elements. Otherwise, it must only have a single array element.
-Enumerates the video processor capabilities of the driver.
-A reference to a
Receives a reference to the
If this method succeeds, it returns
To create the video processor device, pass the
Gets the number of profiles that are supported by the driver.
-Returns the number of profiles.
To enumerate the profiles, call
Gets a profile that is supported by the driver.
-The zero-based index of the profile. To get the number of profiles that the driver supports, call
Receives a
If this method succeeds, it returns
Given a profile, checks whether the driver supports a specified output format.
-A reference to a
A
Receives the value TRUE if the format is supported, or
If this method succeeds, it returns
If the driver does not support the profile given in pDecoderProfile, the method returns E_INVALIDARG. If the driver supports the profile, but the DXGI format is not compatible with the profile, the method succeeds but returns the value
Gets the number of decoder configurations that the driver supports for a specified video description.
-A reference to a
Receives the number of decoder configurations.
If this method succeeds, it returns
To enumerate the decoder configurations, call
Gets a decoder configuration that is supported by the driver.
-A reference to a
The zero-based index of the decoder configuration. To get the number of configurations that the driver supports, call
A reference to a
If this method succeeds, it returns
Queries the driver for its content protection capabilities.
-A reference to a
Value | Meaning
---|---
D3D11_CRYPTO_TYPE_AES128_CTR | 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher.
If no encryption will be used, set this parameter to
A reference to a
The driver might disallow some combinations of encryption type and profile.
A reference to a
If this method succeeds, it returns
Gets a cryptographic key-exchange mechanism that is supported by the driver.
-A reference to a
Value | Meaning
---|---
D3D11_CRYPTO_TYPE_AES128_CTR | 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher.
A reference to a
The zero-based index of the key-exchange type. The driver reports the number of types in the KeyExchangeTypeCount member of the
Receives a
If this method succeeds, it returns
Sets private data on the video device and associates that data with a
The
The size of the data, in bytes.
A reference to the data.
If this method succeeds, it returns
Sets a private
If this method succeeds, it returns
Provides the video decoding and video processing capabilities of a Microsoft Direct3D 11 device.
-The Direct3D 11 device supports this interface. To get a reference to this interface, call QueryInterface with an
Retrieves optional sizes for private driver data.
-Indicates the crypto type for which the private input and output size is queried.
Indicates the decoder profile for which the private input and output size is queried.
Indicates the key exchange type for which the private input and output size is queried.
Returns the size of private data that the driver needs for input commands.
Returns the size of private data that the driver needs for output commands.
If this method succeeds, it returns
When pKeyExchangeType is D3D11_KEY_EXCHANGE_HW_PROTECTION, the following behavior is expected in the
Retrieves capabilities and limitations of the video decoder.
-The decode profile for which the capabilities are queried.
The video width for which the capabilities are queried.
The video height for which the capabilities are queried.
The frame rate of the video content. This information is used by the driver to determine whether the video can be decoded in real-time.
The bit rate of the video stream. A value of zero indicates that the bit rate can be ignored.
The type of cryptography used to encrypt the video stream. A value of
A reference to a bitwise OR combination of
This method returns one of the following error codes.
Return code | Description
---|---
S_OK | The operation completed successfully.
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern.
Indicates whether the video decoder supports downsampling with the specified input format, and whether real-time downsampling is supported.
-An object describing the decoding profile, the resolution, and format of the input stream. This is the resolution and format to be downsampled.
A
The configuration data associated with the decode profile.
The frame rate of the video content. This is used by the driver to determine whether the video can be decoded in real-time.
An object describing the resolution, format, and colorspace of the output frames. This is the destination resolution and format of the downsample operation.
Pointer to a boolean value set by the driver that indicates if downsampling is supported with the specified input data. True if the driver supports the requested downsampling; otherwise, false.
Pointer to a boolean value set by the driver that indicates if real-time decoding is supported with the specified input data. True if the driver supports the requested real-time decoding; otherwise, false. Note that the returned value is based on the current configuration of the video decoder and does not guarantee that real-time decoding will be supported for future downsampling operations.
This method returns one of the following error codes.
Return code | Description
---|---
S_OK | The operation completed successfully.
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern.
You should call GetVideoDecoderCaps to determine whether decoder downsampling is supported before checking support for a specific configuration.
-Allows the driver to recommend optimal output downsample parameters from the input parameters.
-A
A
The configuration data associated with the decode profile.
The frame rate of the video content. This is used by the driver to determine whether the video can be decoded in real-time.
Pointer to a
This method returns one of the following error codes.
Return code | Description
---|---
S_OK | The operation completed successfully.
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern.
You should call GetVideoDecoderCaps to determine whether decoder downsampling is supported before checking support for a specific configuration.
-Represents a video processor for Microsoft Direct3D 11.
-To get a reference to this interface, call
Gets the content description that was used to create the video processor.
-Gets the rate conversion capabilities of the video processor.
-Gets the content description that was used to create the video processor.
-A reference to a
Gets the rate conversion capabilities of the video processor.
-A reference to a
Gets the content description that was used to create this enumerator.
-Gets the content description that was used to create this enumerator.
-Gets the capabilities of the video processor.
-Gets the content description that was used to create this enumerator.
-A reference to a
If this method succeeds, it returns
Queries whether the video processor supports a specified video format.
-The video format to query, specified as a
Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Gets the capabilities of the video processor.
-A reference to a
If this method succeeds, it returns
Returns a group of video processor capabilities that are associated with frame-rate conversion, including deinterlacing and inverse telecine.
-The zero-based index of the group to retrieve. To get the maximum index, call
A reference to a
If this method succeeds, it returns
The capabilities defined in the
Gets a list of custom frame rates that a video processor supports.
-The zero-based index of the frame-rate capability group. To get the maxmum index, call
The zero-based index of the custom rate to retrieve. To get the maximum index, call
This index value is always relative to the capability group specified in the TypeIndex parameter.
A reference to a
If this method succeeds, it returns
Gets the range of values for an image filter.
-The type of image filter, specified as a
A reference to a
If this method succeeds, it returns
Enumerates the video processor capabilities of a Microsoft Direct3D 11 device.
-To get a reference to this interface, call
Indicates whether the driver supports the specified combination of format and colorspace conversions.
-The format of the video processor input.
The colorspace of the video processor input.
The format of the video processor output.
The colorspace of the video processor output.
Pointer to a boolean that is set by the driver to indicate if the specified combination of format and colorspace conversions is supported. True if the conversion is supported; otherwise, false.
This method returns one of the following error codes.
The operation completed successfully. | |
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern. |
Identifies the input surfaces that can be accessed during video processing.
-To get a reference to this interface, call
Gets the properties of the video processor input view.
-Gets the properties of the video processor input view.
-A reference to a
Identifies the output surfaces that can be accessed during video processing.
-To get a reference to this interface, call
Gets the properties of the video processor output view.
-Gets the properties of the video processor output view.
-A reference to a
Contains an initialization vector (IV) for 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher encryption.
-The IV, in big-endian format.
The block count, in big-endian format.
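The 128-bit IV layout described above can be sketched as a plain C struct. This mirror (`AesCtrIv`) is hypothetical; the real definition is `D3D11_AES_CTR_IV` in d3d11.h, with both 64-bit halves stored big-endian.

```c
#include <stdint.h>

/* Hypothetical mirror of the D3D11_AES_CTR_IV layout: one 128-bit AES-CTR
   block split into a 64-bit IV and a 64-bit block count, both big-endian. */
typedef struct {
    uint64_t IV;    /* initialization vector, big-endian */
    uint64_t Count; /* block count, big-endian */
} AesCtrIv;
```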
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_ENCRYPTION_WHEN_ACCESSIBLE command.
-A
A
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_CRYPTO_SESSION command.
-A
A handle to the decoder device. Get this from
A handle to the cryptographic session. Get this from
A handle to the Direct3D device. Get this from D3D11VideoContext::QueryAuthenticatedChannel using D3D11_AUTHENTICATED_QUERY_DEVICE_HANDLE.
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_INITIALIZE command.
-A
The initial sequence number for queries.
The initial sequence number for commands.
Contains input data for the
Contains the response from the
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_PROTECTION command.
-A
A
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_SHARED_RESOURCE command.
-A
A
A process handle. If the ProcessType member equals
If TRUE, the specified process has access to restricted shared resources.
Specifies the protection level for video content.
-If 1, video content protection is enabled.
If 1, the application requires video to be displayed using either a hardware overlay or full-screen exclusive mode.
Reserved. Set all bits to zero.
Use this member to access all of the bits in the union.
Contains the response to a D3D11_AUTHENTICATED_QUERY_ENCRYPTION_WHEN_ACCESSIBLE_GUID_COUNT query.
-A
The number of encryption GUIDs.
Contains input data for a D3D11_AUTHENTICATED_QUERY_ENCRYPTION_WHEN_ACCESSIBLE_GUID query.
-A
The index of the encryption
Contains the response to a D3D11_AUTHENTICATED_QUERY_ENCRYPTION_WHEN_ACCESSIBLE_GUID query.
-A
The index of the encryption
A
Contains the response to a D3D11_AUTHENTICATED_QUERY_CHANNEL_TYPE query.
-A
A
Contains input data for a D3D11_AUTHENTICATED_QUERY_CRYPTO_SESSION query.
-A
A handle to a decoder device.
Contains the response to a D3D11_AUTHENTICATED_QUERY_CRYPTO_SESSION query.
-A
A handle to a decoder device.
A handle to the cryptographic session that is associated with the decoder device.
A handle to the Direct3D device that is associated with the decoder device.
Contains the response to a D3D11_AUTHENTICATED_QUERY_CURRENT_ENCRYPTION_WHEN_ACCESSIBLE query.
-A
A
Contains the response to a D3D11_AUTHENTICATED_QUERY_DEVICE_HANDLE query.
-A
A handle to the device.
Contains input data for the
Contains a response from the
Contains input data for a D3D11_AUTHENTICATED_QUERY_OUTPUT_ID_COUNT query.
-A
A handle to the device.
A handle to the cryptographic session.
Contains the response to a D3D11_AUTHENTICATED_QUERY_OUTPUT_ID_COUNT query.
-A
A handle to the device.
A handle to the cryptographic session.
The number of output IDs associated with the specified device and cryptographic session.
Contains input data for a D3D11_AUTHENTICATED_QUERY_OUTPUT_ID query.
-A
A handle to the device.
A handle to the cryptographic session.
The index of the output ID.
Contains the response to a D3D11_AUTHENTICATED_QUERY_OUTPUT_ID query.
-A
A handle to the device.
A handle to the cryptographic session.
The index of the output ID.
An output ID that is associated with the specified device and cryptographic session.
Contains the response to a D3D11_AUTHENTICATED_QUERY_PROTECTION query.
-A
A
Contains the response to a D3D11_AUTHENTICATED_QUERY_RESTRICTED_SHARED_RESOURCE_PROCESS_COUNT query.
-A
The number of processes that are allowed to open shared resources that have restricted access. A process cannot open such a resource unless the process has been granted access.
Contains input data for a D3D11_AUTHENTICATED_QUERY_RESTRICTED_SHARED_RESOURCE_PROCESS query.
-A
The index of the process.
Contains the response to a D3D11_AUTHENTICATED_QUERY_RESTRICTED_SHARED_RESOURCE_PROCESS query.
-The Desktop Window Manager (DWM) process is identified by setting ProcessIdentifier equal to
A
The index of the process in the list of processes.
A
A process handle. If the ProcessIdentifier member equals
Contains the response to a D3D11_AUTHENTICATED_QUERY_UNRESTRICTED_PROTECTED_SHARED_RESOURCE_COUNT query.
-A
The number of protected, shared resources that can be opened by any process without restrictions.
Describes an HLSL class instance.
-The
The members of this structure except InstanceIndex are valid (non-default values) if they describe a class instance acquired using
The instance ID of an HLSL class; the default value is 0.
The instance index of an HLSL class; the default value is 0.
The type ID of an HLSL class; the default value is 0.
Describes the constant buffer associated with an HLSL class; the default value is 0.
The base constant buffer offset associated with an HLSL class; the default value is 0.
The base texture associated with an HLSL class; the default value is 127.
The base sampler associated with an HLSL class; the default value is 15.
True if the class was created; the default value is false.
Information about the video card's performance counter capabilities.
-This structure is returned by
Largest device-dependent counter ID that the device supports. If none are supported, this value will be 0. Otherwise it will be greater than or equal to
Number of counters that can be simultaneously supported.
Number of detectable parallel units that the counter is able to discern. Values are 1 ~ 4. Use NumDetectableParallelUnits to interpret the values of the VERTEX_PROCESSING, GEOMETRY_PROCESSING, PIXEL_PROCESSING, and OTHER_GPU_PROCESSING counters.
Describes a counter.
-This structure is used by
Type of counter (see
Reserved.
Used with
Use this structure with CreateWrappedResource.
-Stencil operations that can be performed based on the results of stencil test.
-All stencil operations are specified as a
This structure is a member of a depth-stencil description.
-The stencil operation to perform when stencil testing fails.
The stencil operation to perform when stencil testing passes and depth testing fails.
The stencil operation to perform when stencil testing and depth testing both pass.
A function that compares stencil data against existing stencil data. The function options are listed in
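The three stencil operations above are selected by the outcome of the stencil and depth tests. A small sketch of that selection logic follows; the `DepthStencilOpDesc` mirror and `select_stencil_op` helper are hypothetical (the real structure is `D3D11_DEPTH_STENCILOP_DESC` in d3d11.h).

```c
/* Hypothetical mirror of the depth-stencil operation description above. */
typedef struct {
    int StencilFailOp;      /* op used when the stencil test fails */
    int StencilDepthFailOp; /* op used when stencil passes but depth fails */
    int StencilPassOp;      /* op used when both tests pass */
    int StencilFunc;        /* comparison applied against existing stencil data */
} DepthStencilOpDesc;

/* Choose which stencil operation applies, given the two test results. */
int select_stencil_op(const DepthStencilOpDesc *d,
                      int stencil_passed, int depth_passed)
{
    if (!stencil_passed) return d->StencilFailOp;      /* depth result is irrelevant */
    if (!depth_passed)   return d->StencilDepthFailOp;
    return d->StencilPassOp;
}
```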
Specifies the subresources of a texture that are accessible from a depth-stencil view.
-These are valid formats for a depth-stencil view:
A depth-stencil view cannot use a typeless format. If the format chosen is
A depth-stencil-view description is needed when calling
Specifies the subresource from a 1D texture that is accessible to a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first mipmap level to use.
Specifies the subresources from an array of 1D textures to use in a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first mipmap level to use.
The index of the first texture to use in an array of textures.
Number of textures to use.
Specifies the subresource from a 2D texture that is accessible to a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first mipmap level to use.
Specifies the subresources from an array 2D textures that are accessible to a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first mipmap level to use.
The index of the first texture to use in an array of textures.
Number of textures to use.
Specifies the subresource from a multisampled 2D texture that is accessible to a depth-stencil view.
-Because a multisampled 2D texture contains a single subtexture, there is nothing to specify; this unused member is included so that this structure will compile in C.
-Unused.
Specifies the subresources from an array of multisampled 2D textures for a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first texture to use in an array of textures.
Number of textures to use.
Resource data format (see
Type of resource (see
A value that describes whether the texture is read only. Pass 0 to specify that it is not read only; otherwise, pass one of the members of the
Specifies a 1D texture subresource (see
Specifies an array of 1D texture subresources (see
Specifies a 2D texture subresource (see
Specifies an array of 2D texture subresources (see
Specifies a multisampled 2D texture (see
Specifies an array of multisampled 2D textures (see
Arguments for draw indexed instanced indirect.
- The members of this structure serve the same purpose as the parameters of
The number of indices read from the index buffer for each instance.
The number of instances to draw.
The location of the first index read by the GPU from the index buffer.
A value added to each index before reading a vertex from the vertex buffer.
A value added to each index before reading per-instance data from a vertex buffer.
Arguments for draw instanced indirect.
- The members of this structure serve the same purpose as the parameters of
The number of vertices to draw.
The number of instances to draw.
The index of the first vertex.
A value added to each index before reading per-instance data from a vertex buffer.
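The two indirect-argument layouts above can be mirrored as tightly packed C structs, which is how they must appear in the argument buffer read by the GPU. These mirrors are a sketch; the canonical definitions are `D3D11_DRAW_INDEXED_INSTANCED_INDIRECT_ARGS` and `D3D11_DRAW_INSTANCED_INDIRECT_ARGS` in d3d11.h.

```c
#include <stdint.h>

/* Hypothetical mirror of the draw-indexed-instanced indirect arguments. */
typedef struct {
    uint32_t IndexCountPerInstance; /* indices read per instance */
    uint32_t InstanceCount;         /* instances to draw */
    uint32_t StartIndexLocation;    /* first index read from the index buffer */
    int32_t  BaseVertexLocation;    /* signed: added to each index before the vertex fetch */
    uint32_t StartInstanceLocation; /* added before reading per-instance data */
} DrawIndexedInstancedIndirectArgs;

/* Hypothetical mirror of the draw-instanced indirect arguments. */
typedef struct {
    uint32_t VertexCountPerInstance;
    uint32_t InstanceCount;
    uint32_t StartVertexLocation;
    uint32_t StartInstanceLocation;
} DrawInstancedIndirectArgs;
```

A buffer used with DrawIndexedInstancedIndirect must hold the five 32-bit values contiguously in exactly this order.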
Specifies which bytes in a video surface are encrypted.
-The number of bytes that are encrypted at the start of the buffer.
The number of bytes that are skipped after the first NumEncryptedBytesAtBeginning bytes, and then after each block of NumBytesInEncryptPattern bytes. Skipped bytes are not encrypted.
The number of bytes that are encrypted after each block of skipped bytes.
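The encrypt/skip pattern above (a leading encrypted run, then alternating skip and encrypt runs) can be expressed as a small predicate. This helper is hypothetical, purely to illustrate the layout; it is not part of the D3D11 API.

```c
#include <stddef.h>

/* Returns 1 if the byte at `offset` falls in an encrypted region, given the
   pattern described above: the first num_encrypted_at_begin bytes are
   encrypted, then the buffer alternates skip-run / encrypt-run. */
int is_byte_encrypted(size_t offset,
                      size_t num_encrypted_at_begin,
                      size_t num_bytes_in_skip_pattern,
                      size_t num_bytes_in_encrypt_pattern)
{
    if (offset < num_encrypted_at_begin)
        return 1;                                   /* leading encrypted run */
    offset -= num_encrypted_at_begin;
    size_t period = num_bytes_in_skip_pattern + num_bytes_in_encrypt_pattern;
    if (period == 0)
        return 0;                                   /* no repeating pattern */
    size_t pos = offset % period;
    return pos >= num_bytes_in_skip_pattern;        /* skip run first, then encrypt run */
}
```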
Describes information about Direct3D 11.1 adapter architecture.
-Specifies whether a rendering device batches rendering commands and performs multipass rendering into tiles or bins over a render area. Certain API usage patterns that are fine for TileBasedDeferredRenderers (TBDRs) can perform worse on non-TBDRs and vice versa. Applications that are careful about rendering can be friendly to both TBDR and non-TBDR architectures. TRUE if the rendering device batches rendering commands and
Describes compute shader and raw and structured buffer support in the current graphics driver.
-Direct3D 11 devices (
TRUE if compute shaders and raw and structured buffers are supported; otherwise
Describes Direct3D 11.1 feature options in the current graphics driver.
-If a Microsoft Direct3D device supports feature level 11.1 (
Feature level 11.1 provides the following additional features:
The runtime always sets the following groupings of members identically. That is, all the values in a grouping are TRUE or
Specifies whether logic operations are available in blend state. The runtime sets this member to TRUE if logic operations are available in blend state and
Specifies whether the driver can render with no render target views (RTVs) or depth stencil views (DSVs), and only unordered access views (UAVs) bound. The runtime sets this member to TRUE if the driver can render with no RTVs or DSVs and only UAVs bound and
Specifies whether the driver supports the
Specifies whether the driver supports new semantics for copy and update that are exposed by the
Specifies whether the driver supports the
Specifies whether you can call
Specifies whether the driver supports partial updates of constant buffers. The runtime sets this member to TRUE if the driver supports partial updates of constant buffers and
Specifies whether the driver supports new semantics for setting offsets in constant buffers for a shader. The runtime sets this member to TRUE if the driver supports allowing you to specify offsets when you call new methods like the
Specifies whether you can call
Specifies whether you can call
Specifies whether the driver supports multisample rendering when you render with RTVs bound. If TRUE, you can set the ForcedSampleCount member of
Specifies whether the hardware and driver support the msad4 intrinsic function in shaders. The runtime sets this member to TRUE if the hardware and driver support calls to msad4 intrinsic functions in shaders. If
Specifies whether the hardware and driver support the fma intrinsic function and other extended doubles instructions (DDIV and DRCP) in shaders. The fma intrinsic function emits an extended doubles DFMA instruction. The runtime sets this member to TRUE if the hardware and driver support extended doubles instructions in shaders (shader model 5 and higher). Support of this option implies support of basic double-precision shader instructions as well. You can use the
Specifies whether the hardware and driver support sharing a greater variety of Texture2D resource types and formats. The runtime sets this member to TRUE if the hardware and driver support extended Texture2D resource sharing.
Describes Direct3D 11.2 feature options in the current graphics driver.
- If the Direct3D API is the Direct3D 11.2 runtime and can support 11.2 features,
Specifies whether the hardware and driver support tiled resources. The runtime sets this member to a
Specifies whether the hardware and driver support the filtering options (
Specifies whether the hardware and driver also support the
Specifies support for creating
Describes Direct3D 11.3 feature options in the current graphics driver.
-Whether to use the VP and RT array index from any shader feeding the rasterizer.
Describes Direct3D 11.4 feature options in the current graphics driver.
-Use this structure with the
Refer to the section on NV12 in Direct3D 11.4 Features.
-Specifies a
Describes Direct3D 9 feature options in the current graphics driver.
-Specifies whether the driver supports the nonpowers-of-2-unconditionally feature. For more information about this feature, see feature level. The runtime sets this member to TRUE for hardware at Direct3D 10 and higher feature levels. For hardware at Direct3D 9.3 and lower feature levels, the runtime sets this member to
Describes Direct3D 9 feature options in the current graphics driver.
-You can use the
Specifies whether the driver supports the nonpowers-of-2-unconditionally feature. For more info about this feature, see feature level. The runtime sets this member to TRUE for hardware at Direct3D 10 and higher feature levels. For hardware at Direct3D 9.3 and lower feature levels, the runtime sets this member to
Specifies whether the driver supports the shadowing feature with the comparison-filtering mode set to less than or equal to. The runtime sets this member to TRUE for hardware at Direct3D 10 and higher feature levels. For hardware at Direct3D 9.3 and lower feature levels, the runtime sets this member to TRUE only if the hardware and driver support the shadowing feature; otherwise
Specifies whether the hardware and driver support simple instancing. The runtime sets this member to TRUE if the hardware and driver support simple instancing.
Specifies whether the hardware and driver support setting a single face of a TextureCube as a render target while the depth stencil surface that is bound alongside can be a Texture2D (as opposed to TextureCube). The runtime sets this member to TRUE if the hardware and driver support this feature; otherwise
If the hardware and driver don't support this feature, the app must match the render target surface type with the depth stencil surface type. Because hardware at Direct3D 9.3 and lower feature levels doesn't allow TextureCube depth surfaces, the only way to render a scene into a TextureCube while having depth buffering enabled is to render each TextureCube face separately to a Texture2D render target first (because that can be matched with a Texture2D depth), and then copy the results into the TextureCube. If the hardware and driver support this feature, the app can just render to the TextureCube faces directly while getting depth buffering out of a Texture2D depth buffer.
You only need to query this feature from hardware at Direct3D 9.3 and lower feature levels because hardware at Direct3D 10.0 and higher feature levels allows TextureCube depth surfaces.
Describes Direct3D 9 shadow support in the current graphics driver.
-Shadows are an important element in realistic 3D scenes. You can use the shadow buffer technique to render shadows. The basic principle of the technique is to use a depth buffer to store the scene depth info from the perspective of the light source, and then compare each point rendered in the scene with that buffer to determine if it is in shadow.
To render objects into the scene with shadows on them, you create sampler state objects with comparison filtering enabled and the comparison mode (ComparisonFunc) set to LessEqual. You can also set BorderColor addressing on this depth sampler, even though BorderColor isn't typically allowed on feature levels 9.1 and 9.2. By using the border color and picking 0.0 or 1.0 as the border color value, you can control whether the regions off the edge of the shadow map appear to be always in shadow or never in shadow, respectively. You can control the shadow filter quality with the Mag and Min filter settings in the comparison sampler. Point sampling will produce shadows with non-anti-aliased edges. Linear filter sampler settings will result in higher-quality shadow edges, but might affect performance on some power-optimized devices.
Note: If you use a separate setting for the Mag versus Min filter options, you produce an undefined result. Anisotropic filtering is not supported. The Mip filter choice is not relevant because feature level 9.x does not allow mipmapped depth buffers.
Note: On feature level 9.x, you can't compile a shader with the SampleCmp and SampleCmpLevelZero intrinsic functions by using older versions of the compiler. For example, you can't use the fxc.exe compiler that ships with the DirectX SDK or use the
Specifies whether the driver supports the shadowing feature with the comparison-filtering mode set to less than or equal to. The runtime sets this member to TRUE for hardware at Direct3D 10 and higher feature levels. For hardware at Direct3D 9.3 and lower feature levels, the runtime sets this member to TRUE only if the hardware and driver support the shadowing feature; otherwise
Describes whether simple instancing is supported.
- If the Direct3D API is the Direct3D 11.2 runtime and can support 11.2 features,
Simple instancing means that instancing is supported with the caveat that the InstanceDataStepRate member of the
Specifies whether the hardware and driver support simple instancing. The runtime sets this member to TRUE if the hardware and driver support simple instancing.
Describes double data type support in the current graphics driver.
-If the runtime sets DoublePrecisionFloatShaderOps to TRUE, the hardware and driver support the following Shader Model 5 instructions:
Specifies whether double types are allowed. If TRUE, double types are allowed; otherwise
Describes which resources are supported by the current graphics driver for a given format.
-
Combination of
Describes which unordered resource options are supported by the current graphics driver for a given format.
-
Combination of
Describes feature data GPU virtual address support, including maximum address bits per resource and per process.
- See
The maximum GPU virtual address bits per resource.
The maximum GPU virtual address bits per process.
Describes whether a GPU profiling technique is supported.
-If the Direct3D API is the Direct3D 11.2 runtime and can support 11.2 features,
Specifies whether the hardware and driver support a GPU profiling technique that can be used with development tools. The runtime sets this member to TRUE if the hardware and driver support data marking.
Stencil operations that can be performed based on the results of stencil test.
-All stencil operations are specified as a
This structure is a member of a depth-stencil description.
-The stencil operation to perform when stencil testing fails.
Describes precision support options for shaders in the current graphics driver.
-For hardware at Direct3D 10 and higher feature levels, the runtime sets both members identically. For hardware at Direct3D 9.3 and lower feature levels, the runtime can set a lower precision support in the PixelShaderMinPrecision member than the AllOtherShaderStagesMinPrecision member; for 9.3 and lower, all other shader stages represent only the vertex shader.
For more info about HLSL minimum precision, see using HLSL minimum precision.
-A combination of
A combination of
Describes the multi-threading features that are supported by the current graphics driver.
-Use the
TRUE means resources can be created concurrently on multiple threads while drawing;
TRUE means command lists are supported by the current driver;
Allow or deny certain types of messages to pass through a filter.
-Number of message categories to allow or deny.
Array of message categories to allow or deny. Array must have at least NumCategories members (see
Allow or deny certain types of messages to pass through a filter.
-Number of message categories to allow or deny.
Array of message categories to allow or deny. Array must have at least NumCategories members (see
Number of message severity levels to allow or deny.
Array of message severity levels to allow or deny. Array must have at least NumSeverities members (see
Number of message IDs to allow or deny.
Array of message IDs to allow or deny. Array must have at least NumIDs members (see
A description of a single element for the input-assembler stage.
-An input-layout object contains an array of structures, each structure defines one element being read from an input slot. Create an input-layout object by calling
The HLSL semantic associated with this element in a shader input-signature.
The semantic index for the element. A semantic index modifies a semantic with an integer index number. A semantic index is only needed in a case where there is more than one element with the same semantic. For example, a 4x4 matrix would have four components, each with the semantic name "matrix"; however, each of the four components would have a different semantic index (0, 1, 2, and 3).
The data type of the element data. See
An integer value that identifies the input-assembler (see input slot). Valid values are between 0 and 15, defined in D3D11.h.
Optional. Offset (in bytes) from the start of the vertex. Use D3D11_APPEND_ALIGNED_ELEMENT for convenience to define the current element directly after the previous one, including any packing if necessary.
Identifies the input data class for a single input slot (see
The number of instances to draw using the same per-instance data before advancing in the buffer by one element. This value must be 0 for an element that contains per-vertex data (the slot class is set to
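The "append" convention for AlignedByteOffset described above can be illustrated with a small offset-resolution sketch. The helper and the per-element format sizes are hypothetical; the real sentinel is `D3D11_APPEND_ALIGNED_ELEMENT` in d3d11.h, and this sketch ignores any alignment padding the runtime may insert.

```c
#include <stddef.h>
#include <stdint.h>

#define APPEND_ALIGNED_ELEMENT 0xffffffffu /* mirrors D3D11_APPEND_ALIGNED_ELEMENT */

/* Resolve AlignedByteOffset values for an element array: any offset given
   as APPEND_ALIGNED_ELEMENT is packed directly after the previous element.
   `sizes` holds the byte size of each element's format. */
void resolve_offsets(const uint32_t *sizes, uint32_t *offsets, size_t n)
{
    uint32_t running = 0;
    for (size_t i = 0; i < n; ++i) {
        if (offsets[i] == APPEND_ALIGNED_ELEMENT)
            offsets[i] = running;          /* place right after the previous element */
        running = offsets[i] + sizes[i];   /* advance past this element */
    }
}
```

For a POSITION (12 bytes), TEXCOORD (8 bytes), COLOR (16 bytes) layout declared entirely with the append sentinel, this yields offsets 0, 12, and 20.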
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents key exchange data for hardware content protection.
-A reference to this structure is passed in the pData parameter of
The function ID of the DRM command. The values and meanings of the function ID are defined by the DRM specification.
Pointer to a buffer containing a
Pointer to a buffer containing a
The result of the hardware DRM command.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents key exchange input data for hardware content protection.
-The size of the private data reserved for IHV usage. This size is determined from the pPrivateInputSize parameter returned by the
The size of the DRM command data.
If PrivateDataSize is greater than 0, pbInput[0] through pbInput[PrivateDataSize - 1] is reserved for IHV use.
pbInput[PrivateDataSize] through pbInput[HWProtectionDataSize + PrivateDataSize - 1] contains the input data for the DRM command. The format and size of the DRM command is defined by the DRM specification.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents key exchange output data for hardware content protection.
-The size of the private data reserved for IHV usage. This size is determined from the pPrivateOutputSize parameter returned by the
The maximum size of data that the driver can return in the output buffer. The last byte that it can write to is pbOutput[PrivateDataSize + MaxHWProtectionDataSize - 1].
The size of the output data written by the driver.
The number of 100-nanosecond units spent transporting the data.
The number of 100-nanosecond units spent executing the content protection command.
If PrivateDataSize is greater than 0, pbOutput[0] through pbOutput[PrivateDataSize - 1] is reserved for IHV use.
pbOutput[PrivateDataSize] through pbOutput[HWProtectionDataSize + PrivateDataSize - 1] contains the output data for the DRM command. The format and size of the DRM command is defined by the DRM specification.
A debug message in the Information Queue.
-This structure is returned from
The category of the message. See
The severity of the message. See
The ID of the message. See
The message string.
The length of pDescription in bytes.
Contains a Message Authentication Code (MAC).
-A byte array that contains the cryptographic MAC value of the message.
Describes the tile structure of a tiled resource with mipmaps.
-Number of standard mipmaps in the tiled resource.
Number of packed mipmaps in the tiled resource.
This number starts from the least detailed mipmap (either sharing tiles or using a non-standard tile layout). This number is 0 if no such packing is in the resource. For array surfaces, this value is the number of mipmaps that are packed for a given array slice, where each array slice repeats the same packing.
On Tier 2 tiled-resources hardware, mipmaps that fill at least one standard-shaped tile in all dimensions are not allowed to be included in the set of packed mipmaps. On Tier 1 hardware, mipmaps that are an integer multiple of one standard-shaped tile in all dimensions are not allowed to be included in the set of packed mipmaps. Mipmaps with at least one dimension less than the standard tile shape may or may not be packed. When a given mipmap needs to be packed, all coarser mipmaps for a given array slice are considered packed as well.
Number of tiles for the packed mipmaps in the tiled resource.
If there is no packing, this value is meaningless and is set to 0. Otherwise, it is set to the number of tiles that are needed to represent the set of packed mipmaps. The pixel layout within the packed mipmaps is hardware-specific. If apps define only partial mappings for the set of tiles in packed mipmaps, read and write behavior is vendor-specific and undefined. For arrays, this value is only the count of packed mipmaps within the subresources for each array slice.
Offset of the first packed tile for the resource in the overall range of tiles. If NumPackedMips is 0, this value is meaningless and is 0. Otherwise, it is the offset of the first packed tile for the resource in the overall range of tiles for the resource. A value of 0 for StartTileIndexInOverallResource means the entire resource is packed. For array surfaces, this is the offset for the tiles that contain the packed mipmaps for the first array slice. Packed mipmaps for each array slice in arrayed surfaces are at this offset past the beginning of the tiles for each array slice.
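The per-slice tile arithmetic described for arrayed surfaces reduces to two small formulas, sketched below. Both helpers are hypothetical illustrations, not D3D11 API calls, and assume the per-slice packed offset matches StartTileIndexInOverallResource as described above.

```c
/* Tiles belonging to one array slice: the total tile count for the
   resource divided evenly across the array slices. */
unsigned tiles_per_array_slice(unsigned total_tiles, unsigned array_size)
{
    return array_size ? total_tiles / array_size : 0;
}

/* Index of the first packed tile for a given slice: the slice's base tile
   plus the same within-slice offset used for the first slice. */
unsigned packed_start_for_slice(unsigned slice, unsigned per_slice_tiles,
                                unsigned start_tile_index)
{
    return slice * per_slice_tiles + start_tile_index;
}
```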
Note: The number of overall tiles, packed or not, for a given array slice is simply the total number of tiles for the resource divided by the resource's array size, so it is easy to locate the range of tiles for any given array slice, out of which StartTileIndexInOverallResource identifies which of those are packed.
Query information about graphics-pipeline activity in between calls to
Query information about the reliability of a timestamp query.
-For a list of query types see
How frequently the GPU counter increments in Hz.
If this is TRUE, something occurred in between the query's
Describes a query.
-Type of query (see
Miscellaneous flags (see
Describes a query.
-A
A combination of
A
Describes rasterizer state.
-Rasterizer state defines the behavior of the rasterizer stage. To create a rasterizer-state object, call
If you do not specify some rasterizer state, the Direct3D runtime uses the following default values for rasterizer state.
State | Default Value |
---|---|
FillMode | Solid |
CullMode | Back |
FrontCounterClockwise | |
DepthBias | 0 |
SlopeScaledDepthBias | 0.0f |
DepthBiasClamp | 0.0f |
DepthClipEnable | TRUE |
ScissorEnable | |
MultisampleEnable | |
AntialiasedLineEnable |
Note: For feature levels 9.1, 9.2, 9.3, and 10.0, if you set MultisampleEnable to
Line-rendering algorithm | MultisampleEnable | AntialiasedLineEnable |
---|---|---|
Aliased | ||
Alpha antialiased | TRUE | |
Quadrilateral | TRUE | |
Quadrilateral | TRUE | TRUE |
The settings of the MultisampleEnable and AntialiasedLineEnable members apply only to multisample antialiasing (MSAA) render targets (that is, render targets with sample counts greater than 1). Because of the differences in feature-level behavior and as long as you aren't performing any line drawing or don't mind that lines render as quadrilaterals, we recommend that you always set MultisampleEnable to TRUE whenever you render on MSAA render targets.
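The default-value table above can be captured as a constant in C. This `RasterizerDesc` mirror is hypothetical (the real type is `D3D11_RASTERIZER_DESC`); the literal values 3 for FillMode and CullMode assume the d3d11.h enum values `D3D11_FILL_SOLID` = 3 and `D3D11_CULL_BACK` = 3.

```c
#include <stdbool.h>

/* Hypothetical mirror of D3D11_RASTERIZER_DESC, field order as in d3d11.h. */
typedef struct {
    int   FillMode;              /* 3 = D3D11_FILL_SOLID */
    int   CullMode;              /* 3 = D3D11_CULL_BACK */
    bool  FrontCounterClockwise;
    int   DepthBias;
    float DepthBiasClamp;
    float SlopeScaledDepthBias;
    bool  DepthClipEnable;
    bool  ScissorEnable;
    bool  MultisampleEnable;
    bool  AntialiasedLineEnable;
} RasterizerDesc;

/* The runtime defaults from the table above. */
const RasterizerDesc kDefaultRasterizer = {
    3, 3, false,            /* solid fill, back-face culling, clockwise = front */
    0, 0.0f, 0.0f,          /* no depth bias */
    true,                   /* depth clipping on */
    false, false, false     /* scissor, MSAA, line AA all off */
};
```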
-Determines the fill mode to use when rendering (see
Indicates triangles facing the specified direction are not drawn (see
Determines if a triangle is front- or back-facing. If this parameter is TRUE, a triangle will be considered front-facing if its vertices are counter-clockwise on the render target and considered back-facing if they are clockwise. If this parameter is
Depth value added to a given pixel. For info about depth bias, see Depth Bias.
Maximum depth bias of a pixel. For info about depth bias, see Depth Bias.
Scalar on a given pixel's slope. For info about depth bias, see Depth Bias.
Enable clipping based on distance.
The hardware always performs x and y clipping of rasterized coordinates. When DepthClipEnable is set to the default (TRUE), the hardware also clips the z value (that is, the hardware performs the last step of the following algorithm). -
0 < w
- -w <= x <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
- -w <= y <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
- 0 <= z <= w
-
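The clip test above can be expressed directly as a predicate on a homogeneous-space vertex. This is a minimal sketch (the function name is illustrative, not a D3D11 API) that ignores the optional guard band an implementation may use:

```cpp
#include <cassert>

// Illustrative sketch of the clip test described above: a vertex
// (x, y, z, w) in homogeneous clip space survives clipping when
// all four conditions hold (guard band ignored).
inline bool InsideClipVolume(float x, float y, float z, float w)
{
    return (0.0f < w) &&
           (-w <= x && x <= w) &&
           (-w <= y && y <= w) &&
           (0.0f <= z && z <= w);
}
```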
When you set DepthClipEnable to
Enable scissor-rectangle culling. All pixels outside an active scissor rectangle are culled.
Specifies whether to use the quadrilateral or alpha line anti-aliasing algorithm on multisample antialiasing (MSAA) render targets. Set to TRUE to use the quadrilateral line anti-aliasing algorithm and to
Specifies whether to enable line antialiasing; only applies if doing line drawing and MultisampleEnable is
Describes rasterizer state.
-Rasterizer state defines the behavior of the rasterizer stage. To create a rasterizer-state object, call
If you do not specify some rasterizer state, the Direct3D runtime uses the following default values for rasterizer state.
State | Default Value |
---|---|
FillMode | Solid |
CullMode | Back |
FrontCounterClockwise | |
DepthBias | 0 |
SlopeScaledDepthBias | 0.0f |
DepthBiasClamp | 0.0f |
DepthClipEnable | TRUE |
ScissorEnable | |
MultisampleEnable | |
AntialiasedLineEnable | |
ForcedSampleCount | 0 |
?
Note: For feature levels 9.1, 9.2, 9.3, and 10.0, if you set MultisampleEnable to
Line-rendering algorithm | MultisampleEnable | AntialiasedLineEnable |
---|---|---|
Aliased | ||
Alpha antialiased | TRUE | |
Quadrilateral | TRUE | |
Quadrilateral | TRUE | TRUE |
?
The settings of the MultisampleEnable and AntialiasedLineEnable members apply only to multisample antialiasing (MSAA) render targets (that is, render targets with sample counts greater than 1). Because of the differences in feature-level behavior and as long as you aren't performing any line drawing or don't mind that lines render as quadrilaterals, we recommend that you always set MultisampleEnable to TRUE whenever you render on MSAA render targets.
-Determines the fill mode to use when rendering.
Indicates that triangles facing the specified direction are not drawn.
Specifies whether a triangle is front- or back-facing. If TRUE, a triangle will be considered front-facing if its vertices are counter-clockwise on the render target and considered back-facing if they are clockwise. If
Depth value added to a given pixel. For info about depth bias, see Depth Bias.
Maximum depth bias of a pixel. For info about depth bias, see Depth Bias.
Scalar on a given pixel's slope. For info about depth bias, see Depth Bias.
Specifies whether to enable clipping based on distance.
The hardware always performs x and y clipping of rasterized coordinates. When DepthClipEnable is set to the default (TRUE), the hardware also clips the z value (that is, the hardware performs the last step of the following algorithm). -
0 < w
- -w <= x <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
- -w <= y <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
- 0 <= z <= w
-
When you set DepthClipEnable to
Specifies whether to enable scissor-rectangle culling. All pixels outside an active scissor rectangle are culled.
Specifies whether to use the quadrilateral or alpha line anti-aliasing algorithm on multisample antialiasing (MSAA) render targets. Set to TRUE to use the quadrilateral line anti-aliasing algorithm and to
Specifies whether to enable line antialiasing; only applies if doing line drawing and MultisampleEnable is
The sample count that is forced while UAV rendering or rasterizing. Valid values are 0, 1, 2, 4, 8, and optionally 16. 0 indicates that the sample count is not forced.
Note: If you want to render with ForcedSampleCount set to 1 or greater, you must follow these guidelines:
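The valid-value rule for ForcedSampleCount stated above can be captured as a predicate. This is an illustrative helper, not a D3D11 API; note that 16 is described as optional, so it is hardware-dependent even though the predicate accepts it.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative check of the ForcedSampleCount rule above: valid values
// are 0 (not forced), 1, 2, 4, 8, and optionally 16.
inline bool IsValidForcedSampleCount(uint32_t count)
{
    return count == 0 || count == 1 || count == 2 ||
           count == 4 || count == 8 || count == 16;
}
```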
Describes rasterizer state.
-Rasterizer state defines the behavior of the rasterizer stage. To create a rasterizer-state object, call
If you do not specify some rasterizer state, the Direct3D runtime uses the following default values for rasterizer state.
State | Default Value |
---|---|
FillMode | Solid |
CullMode | Back |
FrontCounterClockwise | |
DepthBias | 0 |
SlopeScaledDepthBias | 0.0f |
DepthBiasClamp | 0.0f |
DepthClipEnable | TRUE |
ScissorEnable | |
MultisampleEnable | |
AntialiasedLineEnable | |
ForcedSampleCount | 0 |
ConservativeRaster |
?
Note: For feature levels 9.1, 9.2, 9.3, and 10.0, if you set MultisampleEnable to
Line-rendering algorithm | MultisampleEnable | AntialiasedLineEnable |
---|---|---|
Aliased | ||
Alpha antialiased | TRUE | |
Quadrilateral | TRUE | |
Quadrilateral | TRUE | TRUE |
?
The settings of the MultisampleEnable and AntialiasedLineEnable members apply only to multisample antialiasing (MSAA) render targets (that is, render targets with sample counts greater than 1). Because of the differences in feature-level behavior and as long as you aren't performing any line drawing or don't mind that lines render as quadrilaterals, we recommend that you always set MultisampleEnable to TRUE whenever you render on MSAA render targets.
-A
A
Specifies whether a triangle is front- or back-facing. If TRUE, a triangle will be considered front-facing if its vertices are counter-clockwise on the render target and considered back-facing if they are clockwise. If
Depth value added to a given pixel. For info about depth bias, see Depth Bias.
Maximum depth bias of a pixel. For info about depth bias, see Depth Bias.
Scalar on a given pixel's slope. For info about depth bias, see Depth Bias.
Specifies whether to enable clipping based on distance.
The hardware always performs x and y clipping of rasterized coordinates. When DepthClipEnable is set to the default (TRUE), the hardware also clips the z value (that is, the hardware performs the last step of the following algorithm). -
0 < w
- -w <= x <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
- -w <= y <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
- 0 <= z <= w
-
When you set DepthClipEnable to
Specifies whether to enable scissor-rectangle culling. All pixels outside an active scissor rectangle are culled.
Specifies whether to use the quadrilateral or alpha line anti-aliasing algorithm on multisample antialiasing (MSAA) render targets. Set to TRUE to use the quadrilateral line anti-aliasing algorithm and to
Specifies whether to enable line antialiasing; only applies if doing line drawing and MultisampleEnable is
The sample count that is forced while UAV rendering or rasterizing. Valid values are 0, 1, 2, 4, 8, and optionally 16. 0 indicates that the sample count is not forced.
Note: If you want to render with ForcedSampleCount set to 1 or greater, you must follow these guidelines:
A
Describes the blend state for a render target.
-You specify an array of
For info about how blending is done, see the output-merger stage.
Here are the default values for blend state.
State | Default Value |
---|---|
BlendEnable | |
SrcBlend | |
DestBlend | |
BlendOp | |
SrcBlendAlpha | |
DestBlendAlpha | |
BlendOpAlpha | |
RenderTargetWriteMask |
?
-Enable (or disable) blending.
This blend option specifies the operation to perform on the RGB value that the pixel shader outputs. The BlendOp member defines how to combine the SrcBlend and DestBlend operations.
This blend option specifies the operation to perform on the current RGB value in the render target. The BlendOp member defines how to combine the SrcBlend and DestBlend operations.
This blend operation defines how to combine the SrcBlend and DestBlend operations.
This blend option specifies the operation to perform on the alpha value that the pixel shader outputs. Blend options that end in _COLOR are not allowed. The BlendOpAlpha member defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
This blend option specifies the operation to perform on the current alpha value in the render target. Blend options that end in _COLOR are not allowed. The BlendOpAlpha member defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
This blend operation defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
A write mask.
Describes the blend state for a render target.
-You specify an array of
For info about how blending is done, see the output-merger stage.
Here are the default values for blend state.
State | Default Value |
---|---|
BlendEnable | |
LogicOpEnable | |
SrcBlend | |
DestBlend | |
BlendOp | |
SrcBlendAlpha | |
DestBlendAlpha | |
BlendOpAlpha | |
LogicOp | |
RenderTargetWriteMask |
?
-Enable (or disable) blending.
Enable (or disable) a logical operation.
This blend option specifies the operation to perform on the RGB value that the pixel shader outputs. The BlendOp member defines how to combine the SrcBlend and DestBlend operations.
This blend option specifies the operation to perform on the current RGB value in the render target. The BlendOp member defines how to combine the SrcBlend and DestBlend operations.
This blend operation defines how to combine the SrcBlend and DestBlend operations.
This blend option specifies the operation to perform on the alpha value that the pixel shader outputs. Blend options that end in _COLOR are not allowed. The BlendOpAlpha member defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
This blend option specifies the operation to perform on the current alpha value in the render target. Blend options that end in _COLOR are not allowed. The BlendOpAlpha member defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
This blend operation defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
A
A write mask.
Specifies the subresources from a resource that are accessible using a render-target view.
-A render-target-view description is passed into
A render-target-view cannot use the following formats:
If the format is set to
Specifies the elements in a buffer resource to use in a render-target view.
- A render-target view is a member of a render-target-view description (see
Number of bytes between the beginning of the buffer and the first element to access.
The offset of the first element in the view to access, relative to element 0.
The total number of elements in the view.
The width of each element (in bytes). This can be determined from the format stored in the render-target-view description.
Specifies the subresource from a 1D texture to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the mipmap level to use (mip slice).
Specifies the subresources from an array of 1D textures to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the mipmap level to use (mip slice).
The index of the first texture to use in an array of textures.
Number of textures to use.
Specifies the subresource from a 2D texture to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the mipmap level to use (mip slice).
Specifies the subresource from a multisampled 2D texture to use in a render-target view.
-Since a multisampled 2D texture contains a single subresource, there is actually nothing to specify in
Integer of any value. See remarks.
Specifies the subresources from an array of 2D textures to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the mipmap level to use (mip slice).
The index of the first texture to use in an array of textures.
Number of textures in the array to use in the render target view, starting from FirstArraySlice.
Specifies the subresources from an array of multisampled 2D textures to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the first texture to use in an array of textures.
Number of textures to use.
Specifies the subresources from a 3D texture to use in a render-target view.
-This structure is one member of a render target view. See
The index of the mipmap level to use (mip slice).
First depth level to use.
Number of depth levels to use in the render-target view, starting from FirstWSlice. A value of -1 indicates all of the slices along the w axis, starting from FirstWSlice.
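The -1 convention for WSize described above can be sketched as a tiny helper. The name and parameters are illustrative, not part of the D3D11 API:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of the WSize convention above: -1 selects every
// depth slice along the w axis from FirstWSlice onward.
inline uint32_t ViewDepthSliceCount(uint32_t resourceDepth,
                                    uint32_t firstWSlice,
                                    int32_t wSize)
{
    return (wSize == -1) ? resourceDepth - firstWSlice
                         : static_cast<uint32_t>(wSize);
}
```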
The data format (see
The resource type (see
Specifies which buffer elements can be accessed (see
Specifies the subresources in a 1D texture that can be accessed (see
Specifies the subresources in a 1D texture array that can be accessed (see
Specifies the subresources in a 2D texture that can be accessed (see
Specifies the subresources in a 2D texture array that can be accessed (see
Specifies a single subresource because a multisampled 2D texture only contains one subresource (see
Specifies the subresources in a multisampled 2D texture array that can be accessed (see
Specifies subresources in a 3D texture that can be accessed (see
Describes the subresources from a resource that are accessible using a render-target view.
-A render-target-view description is passed into
A render-target-view can't use the following formats:
If the format is set to
Describes the subresource from a 2D texture to use in a render-target view.
-The index of the mipmap level to use (mip slice).
The index (plane slice number) of the plane to use in the texture.
Describes the subresources from an array of 2D textures to use in a render-target view.
-The index of the mipmap level to use (mip slice).
The index of the first texture to use in an array of textures.
Number of textures in the array to use in the render-target view, starting from FirstArraySlice.
The index (plane slice number) of the plane to use in an array of textures.
A
A
A
A
A
A
A
A
A
A
Defines a 3D box.
-The following diagram shows a 3D box, where the origin is the left, front, top corner.
The values for right, bottom, and back are each one pixel past the end of the pixels that are included in the box region. That is, the values for left, top, and front are included in the box region while the values for right, bottom, and back are excluded from the box region. For example, for a box that is one pixel wide, (right - left) == 1; the box region includes the left pixel but not the right pixel.
Coordinates of a box are in bytes for buffers and in texels for textures.
-The x position of the left-hand side of the box.
The y position of the top of the box.
The z position of the front of the box.
The x position of the right-hand side of the box.
The y position of the bottom of the box.
The z position of the back of the box.
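The half-open bound convention described above (right, bottom, and back are one past the last included coordinate) can be shown with a small illustrative struct. The names mirror the members above but this is a sketch, not the D3D11_BOX type itself:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of the box convention above: left/top/front are
// included in the region, right/bottom/back are excluded, so each
// extent is simply the difference of the two bounds.
struct Box3D { uint32_t left, top, front, right, bottom, back; };

inline uint32_t BoxWidth(const Box3D& b)  { return b.right - b.left; }
inline uint32_t BoxHeight(const Box3D& b) { return b.bottom - b.top; }
inline uint32_t BoxDepth(const Box3D& b)  { return b.back - b.front; }
```

A box that is one pixel wide has (right - left) == 1, matching the example in the remarks.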
Describes a sampler state.
-These are the default values for sampler state.
State | Default Value |
---|---|
Filter | |
AddressU | |
AddressV | |
AddressW | |
MinLOD | -3.402823466e+38F (-FLT_MAX) |
MaxLOD | 3.402823466e+38F (FLT_MAX) |
MipMapLODBias | 0.0f |
MaxAnisotropy | 1 |
ComparisonFunc | |
BorderColor | float4(1.0f,1.0f,1.0f,1.0f) |
Texture | N/A |
?
- Filtering method to use when sampling a texture (see
Method to use for resolving a u texture coordinate that is outside the 0 to 1 range (see
Method to use for resolving a v texture coordinate that is outside the 0 to 1 range.
Method to use for resolving a w texture coordinate that is outside the 0 to 1 range.
Offset from the calculated mipmap level. For example, if Direct3D calculates that a texture should be sampled at mipmap level 3 and MipLODBias is 2, then the texture will be sampled at mipmap level 5.
Clamping value used if
A function that compares sampled data against existing sampled data. The function options are listed in
Border color to use if
Lower end of the mipmap range to clamp access to, where 0 is the largest and most detailed mipmap level and any level higher than that is less detailed.
Upper end of the mipmap range to clamp access to, where 0 is the largest and most detailed mipmap level and any level higher than that is less detailed. This value must be greater than or equal to MinLOD. To have no upper limit on LOD set this to a large value such as D3D11_FLOAT32_MAX.
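The MipLODBias example above (computed level 3 plus bias 2 samples level 5), combined with the MinLOD/MaxLOD clamp, can be sketched as follows. The function name is illustrative; this is a sketch of the behavior the members describe, not the hardware's exact filtering path:

```cpp
#include <cassert>
#include <algorithm>

// Illustrative sketch of LOD selection as described above: the bias is
// added to the level Direct3D computes, then the result is clamped to
// the [minLod, maxLod] range.
inline float SelectLod(float computedLod, float mipLodBias,
                       float minLod, float maxLod)
{
    return std::min(std::max(computedLod + mipLodBias, minLod), maxLod);
}
```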
Describes a shader-resource view.
-A view is a format-specific way to look at the data in a resource. The view determines what data to look at, and how it is cast when read.
When viewing a resource, the resource-view description must specify a typed format that is compatible with the resource format. This means that you cannot create a resource-view description using any format with _TYPELESS in the name. You can, however, view a typeless resource by specifying a typed format for the view. For example, a
Create a shader-resource-view description by calling
Specifies the elements in a buffer resource to use in a shader-resource view.
- The
Number of bytes between the beginning of the buffer and the first element to access.
The offset of the first element in the view to access, relative to element 0.
The total number of elements in the view.
The width of each element (in bytes). This can be determined from the format stored in the shader-resource-view description.
Describes the elements in a raw buffer resource to use in a shader-resource view.
-This structure is used by
The index of the first element to be accessed by the view.
The number of elements in the resource.
A
Specifies the subresource from a 1D texture to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
As an example, assuming MostDetailedMip = 6 and MipLevels = 2, the view will have access to 2 mipmap levels, 6 and 7, of the original texture for which
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture1D for which
The maximum number of mipmap levels for the view of the texture. See the remarks.
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
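The MostDetailedMip/MipLevels semantics above, including the -1 convention, can be sketched as a helper. The name is illustrative, not a D3D11 API; with the example values from the remarks (MostDetailedMip = 6, MipLevels = 2) the view covers levels 6 and 7:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of the MipLevels convention above: -1 means every
// level from MostDetailedMip down to the least detailed level of the
// original resource.
inline uint32_t ViewMipCount(uint32_t resourceMipLevels,
                             uint32_t mostDetailedMip,
                             int32_t mipLevels)
{
    return (mipLevels == -1) ? resourceMipLevels - mostDetailedMip
                             : static_cast<uint32_t>(mipLevels);
}
```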
Specifies the subresources from an array of 1D textures to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture1D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
The index of the first texture to use in an array of textures.
Number of textures in the array.
Specifies the subresource from a 2D texture to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture2D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
Specifies the subresources from an array of 2D textures to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture2D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
The index of the first texture to use in an array of textures.
Number of textures in the array.
Specifies the subresources from a 3D texture to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture3D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
Specifies the subresource from a cube texture to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original TextureCube for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
Specifies the subresources from an array of cube textures to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original TextureCube for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
Index of the first 2D texture to use.
Number of cube textures in the array.
Specifies the subresources from a multisampled 2D texture to use in a shader-resource view.
-Since a multisampled 2D texture contains a single subresource, there is actually nothing to specify in
Integer of any value. See remarks.
Specifies the subresources from an array of multisampled 2D textures to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
The index of the first texture to use in an array of textures.
Number of textures to use.
A
The resource type of the view. See D3D11_SRV_DIMENSION. This should be the same as the resource type of the underlying resource. This parameter also determines which _SRV to use in the union below.
View the resource as a buffer using information from a shader-resource view (see
View the resource as a 1D texture using information from a shader-resource view (see
View the resource as a 1D-texture array using information from a shader-resource view (see
View the resource as a 2D-texture using information from a shader-resource view (see
View the resource as a 2D-texture array using information from a shader-resource view (see
View the resource as a 2D-multisampled texture using information from a shader-resource view (see
View the resource as a 2D-multisampled-texture array using information from a shader-resource view (see
View the resource as a 3D texture using information from a shader-resource view (see
View the resource as a 3D-cube texture using information from a shader-resource view (see
View the resource as a 3D-cube-texture array using information from a shader-resource view (see
View the resource as a raw buffer using information from a shader-resource view (see
Describes a shader-resource view.
-A view is a format-specific way to look at the data in a resource. The view determines what data to look at, and how it is cast when read.
When viewing a resource, the resource-view description must specify a typed format that is compatible with the resource format. This means that you cannot create a resource-view description using any format with _TYPELESS in the name. You can, however, view a typeless resource by specifying a typed format for the view. For example, a
Create a shader-resource-view description by calling
Describes the subresource from a 2D texture to use in a shader-resource view.
- Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture2D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
The index (plane slice number) of the plane to use in the texture.
Describes the subresources from an array of 2D textures to use in a shader-resource view.
- Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture2D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
The index of the first texture to use in an array of textures.
Number of textures in the array.
The index (plane slice number) of the plane to use in an array of textures.
A
A D3D11_SRV_DIMENSION-typed value that specifies the resource type of the view. This type is the same as the resource type of the underlying resource. This member also determines which _SRV to use in the union below.
A
A
A
A
A
A
A
A
A
A
A
Description of a vertex element in a vertex buffer in an output slot.
-Zero-based stream number.
Type of output element; possible values include: "POSITION", "NORMAL", or "TEXCOORD0". Note that if SemanticName is
Output element's zero-based index. Should be used if, for example, you have more than one texture coordinate stored in each vertex.
Which component of the entry to begin writing out to. Valid values are 0 to 3. For example, if you only wish to output to the y and z components of a position, then StartComponent should be 1 and ComponentCount should be 2.
The number of components of the entry to write out to. Valid values are 1 to 4. For example, if you only wish to output to the y and z components of a position, then StartComponent should be 1 and ComponentCount should be 2. Note that if SemanticName is
The associated stream output buffer that is bound to the pipeline (see
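The StartComponent/ComponentCount selection described above (for example, writing only the y and z components of a position uses StartComponent = 1 and ComponentCount = 2) can be pictured as a component bitmask. This helper is illustrative, not part of the D3D11 API:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of the component selection above: bit i of the
// result set means component i (x = 0, y = 1, z = 2, w = 3) is written.
inline uint32_t ComponentMask(uint32_t startComponent, uint32_t componentCount)
{
    return ((1u << componentCount) - 1u) << startComponent;
}
```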
Query information about the amount of data streamed out to the stream-output buffers in between
Describes a tiled subresource volume.
-Each packed mipmap is individually reported as 0 for WidthInTiles, HeightInTiles and DepthInTiles. -
The total number of tiles in subresources is WidthInTiles*HeightInTiles*DepthInTiles.
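The tile-count identity above is a direct product of the three extents. A one-line illustrative helper (not a D3D11 API) makes the packed-mipmap case explicit, since packed mipmaps report 0 for all three extents:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of the identity above: total tiles in a
// subresource is WidthInTiles * HeightInTiles * DepthInTiles.
// Packed mipmaps report 0 for each extent and so contribute 0 here.
inline uint32_t SubresourceTileCount(uint32_t widthInTiles,
                                     uint32_t heightInTiles,
                                     uint32_t depthInTiles)
{
    return widthInTiles * heightInTiles * depthInTiles;
}
```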
-The width in tiles of the subresource.
The height in tiles of the subresource.
The depth in tiles of the subresource.
The index of the tile in the overall tiled subresource to start with.
GetResourceTiling sets StartTileIndexInOverallResource to D3D11_PACKED_TILE (0xffffffff) to indicate that the whole
-
Describes a 1D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE1D_DESC derived structure, which is defined in D3D11.h and behaves like an inherited class, to help create a texture description.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Number of textures in the array. The range is from 1 to
Texture format (see
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
Identifies a texture resource for a video processor output view.
-The zero-based index into the array of subtextures.
The index of the first texture to use.
The number of textures in the array.
Describes a 2D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE2D_DESC derived structure, which is defined in D3D11.h and behaves like an inherited class, to help create a texture description.
The device places some size restrictions (must be multiples of a minimum size) for a subsampled, block compressed, or bit-format resource.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
Texture height (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Number of textures in the texture array. The range is from 1 to
Texture format (see
Structure that specifies multisampling parameters for the texture. See
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
Describes a 2D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE2D_DESC1 derived structure, which is defined in D3D11_3.h and behaves like an inherited class, to help create a texture description.
The device places some size restrictions (must be multiples of a minimum size) for a subsampled, block compressed, or bit-format resource.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
Texture height (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Number of textures in the texture array. The range is from 1 to
Texture format (see
Structure that specifies multisampling parameters for the texture. See
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
A
The TextureLayout parameter selects both the actual layout of the texture in memory and the layout visible to the application while the texture is mapped. These flags may not be requested unless CPU access is also requested.
It is illegal to set CPU access flags on default textures without also setting TextureLayout to a value other than
Identifies the texture resource for a video decoder output view.
-The zero-based index of the texture.
Identifies the texture resource for a video processor input view.
-The zero-based index into the array of subtextures.
The zero-based index of the texture.
Identifies a texture resource for a video processor output view.
-The zero-based index into the array of subtextures.
Describes a 3D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE3D_DESC derived structure, which is defined in D3D11.h and behaves like an inherited class, to help create a texture description.
The device restricts the size of subsampled, block compressed, and bit format resources to be multiples of sizes specific to each format.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
Texture height (in texels). The range is from 1 to
Texture depth (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Texture format (see
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
Describes a 3D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE3D_DESC1 derived structure, which is defined in D3D11_3.h and behaves like an inherited class, to help create a texture description.
The device restricts the size of subsampled, block compressed, and bit format resources to be multiples of sizes specific to each format.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
Texture height (in texels). The range is from 1 to
Texture depth (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Texture format (see
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
A
The TextureLayout parameter selects both the actual layout of the texture in memory and the layout visible to the application while the texture is mapped. These flags may not be requested unless CPU access is also requested.
It is illegal to set CPU access flags on default textures without also setting Layout to a value other than
Describes the coordinates of a tiled resource.
-The x position of a tiled resource. Used for buffer and 1D, 2D, and 3D textures.
The y position of a tiled resource. Used for 2D and 3D textures.
The z position of a tiled resource. Used for 3D textures.
A subresource index value into mipmaps and arrays. Used for 1D, 2D, and 3D textures.
For mipmaps that use nonstandard tiling, or are packed, or both use nonstandard tiling and are packed, any subresource value that indicates any of the packed mipmaps all refer to the same tile.
Describes the size of a tiled region.
-The number of tiles in the tiled region.
Specifies whether the runtime uses the Width, Height, and Depth members to define the region.
If TRUE, the runtime uses the Width, Height, and Depth members to define the region.
If
Regardless of whether you specify TRUE or
When the region includes mipmaps that are packed with nonstandard tiling, bUseBox must be
The width of the tiled region, in tiles. Used for buffer and 1D, 2D, and 3D textures.
The height of the tiled region, in tiles. Used for 2D and 3D textures.
The depth of the tiled region, in tiles. Used for 3D textures or arrays. For arrays, used for advancing in depth jumps to next slice of same mipmap size, which isn't contiguous in the subresource counting space if there are multiple mipmaps.
Describes the shape of a tile by specifying its dimensions.
-Texels are equivalent to pixels. For untyped buffer resources, a texel is just a byte. For multisample antialiasing (MSAA) surfaces, the numbers are still in terms of pixels/texels. The values here are independent of the surface dimensions: even if the surface is smaller than what would fit in a tile, the full tile dimensions are reported here.
-The width in texels of the tile.
The height in texels of the tile.
The depth in texels of the tile.
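Because the full tile dimensions are always reported, the number of tiles covering a surface rounds up per dimension. A minimal sketch (function name hypothetical):

```cpp
#include <cstdint>

// Tiles needed to cover one dimension of a surface, given the tile size in
// texels reported by the tile shape: ceil(size / tileSize), so a surface
// smaller than one tile still occupies a whole tile.
uint32_t TilesForDimension(uint32_t sizeInTexels, uint32_t tileSizeInTexels) {
    return (sizeInTexels + tileSizeInTexels - 1) / tileSizeInTexels;
}
```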
Specifies the subresources from a resource that are accessible using an unordered-access view.
-An unordered-access-view description is passed into
Describes the elements in a buffer to use in an unordered-access view.
-This structure is used by a
The zero-based index of the first element to be accessed.
The number of elements in the resource. For structured buffers, this is the number of structures in the buffer.
View options for the resource (see
Describes an unordered-access 1D texture resource.
-This structure is used by a
The mipmap slice index.
Describes an array of unordered-access 1D texture resources.
-This structure is used by a
The mipmap slice index.
The zero-based index of the first array slice to be accessed.
The number of slices in the array.
Describes an unordered-access 2D texture resource.
-This structure is used by a
The mipmap slice index.
Describes an array of unordered-access 2D texture resources.
-This structure is used by a
The mipmap slice index.
The zero-based index of the first array slice to be accessed.
The number of slices in the array.
Describes an unordered-access 3D texture resource.
-This structure is used by a
The mipmap slice index.
The zero-based index of the first depth slice to be accessed.
The number of depth slices.
The data format (see
The resource type (see
Specifies which buffer elements can be accessed (see
Specifies the subresources in a 1D texture that can be accessed (see
Specifies the subresources in a 1D texture array that can be accessed (see
Specifies the subresources in a 2D texture that can be accessed (see
Specifies the subresources in a 2D texture array that can be accessed (see
Specifies subresources in a 3D texture that can be accessed (see
Describes the subresources from a resource that are accessible using an unordered-access view.
-An unordered-access-view description is passed into
Describes an unordered-access 2D texture resource.
-The mipmap slice index.
The index (plane slice number) of the plane to use in the texture.
Describes an array of unordered-access 2D texture resources.
-The mipmap slice index.
The zero-based index of the first array slice to be accessed.
The number of slices in the array.
The index (plane slice number) of the plane to use in an array of textures.
A
A
A
A
A
A
A
A
Defines a color value for Microsoft Direct3D 11 video.
-The anonymous union can represent both RGB and YCbCr colors. The interpretation of the union depends on the context.
-A
A
Specifies an RGB color value.
-The RGB values have a nominal range of [0...1]. For an RGB format with n bits per channel, the value of each color component is calculated as follows:
val = f * ((1 << n)-1)
For example, for RGB-32 (8 bits per channel), val = BYTE(f * 255.0)
.
The red value.
The green value.
The blue value.
The alpha value. Values range from 0 (transparent) to 1 (opaque).
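The quantization rule above can be sketched directly (function name hypothetical):

```cpp
#include <cstdint>

// Quantize a normalized color component f in [0, 1] to an n-bit channel
// using the documented rule val = f * ((1 << n) - 1). For 8-bit channels
// this reproduces val = BYTE(f * 255.0).
uint32_t QuantizeChannel(float f, unsigned n) {
    return static_cast<uint32_t>(f * ((1u << n) - 1));
}
```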
Describes the content-protection capabilities of a graphics driver.
-A bitwise OR of zero or more flags from the
The number of cryptographic key-exchange types that are supported by the driver. To get the list of key-exchange types, call the
The encryption block size, in bytes. The size of data to be encrypted must be a multiple of this value.
The total amount of memory, in bytes, that can be used to hold protected surfaces.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides data to the
This structure is passed in the pContentKey parameter of the
Describes a compressed buffer for decoding.
-The type of buffer, specified as a member of the
Reserved.
The offset of the relevant data from the beginning of the buffer, in bytes. This value must be zero.
The size of the relevant data, in bytes.
The macroblock address of the first macroblock in the buffer. The macroblock address is given in raster scan order.
The number of macroblocks of data in the buffer. This count includes skipped macroblocks.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
A reference to a buffer that contains an initialization vector (IV) for encrypted data. If the decode buffer does not contain encrypted data, set this member to
The size of the buffer specified in the pIV parameter. If pIV is
If TRUE, the video surfaces are partially encrypted.
A
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Describes a compressed buffer for decoding.
-The type of buffer.
The offset of the relevant data from the beginning of the buffer, in bytes. This value must be zero.
Size of the relevant data.
A reference to a buffer that contains an initialization vector (IV) for encrypted data. If the decode buffer does not contain encrypted data, set this member to
The size of the buffer specified in the pIV parameter. If pIV is
A reference to an array of
Values in the sub sample mapping blocks are relative to the start of the decode buffer.
The number of
Describes the configuration of a Microsoft Direct3D 11 decoder device for DirectX Video Acceleration (DXVA).
-If the bitstream data buffers are encrypted using the D3D11CryptoSession mechanism, this
If the macroblock control data buffers are encrypted using the D3D11CryptoSession mechanism, this
If the residual difference decoding data buffers are encrypted using the D3D11CryptoSession mechanism, this
Indicates whether the host-decoder sends raw bit-stream data. If the value is 1, the data for the pictures will be sent in bit-stream buffers as raw bit-stream content. If the value is 0, picture data will be sent using macroblock control command buffers. If either ConfigResidDiffHost or ConfigResidDiffAccelerator is 1, the value must be 0.
Specifies whether macroblock control commands are in raster scan order or in arbitrary order. If the value is 1, the macroblock control commands within each macroblock control command buffer are in raster-scan order. If the value is 0, the order is arbitrary. For some types of bit streams, forcing raster order either greatly increases the number of required macroblock control buffers that must be processed, or requires host reordering of the control information. Therefore, supporting arbitrary order can be more efficient.
Contains the host residual difference configuration. If the value is 1, some residual difference decoding data may be sent as blocks in the spatial domain from the host. If the value is 0, spatial domain data will not be sent.
Indicates the word size used to represent residual difference spatial-domain blocks for predicted (non-intra) pictures when using host-based residual difference decoding.
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 1, the host will send residual difference spatial-domain blocks for non-intra macroblocks using 8-bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 0, the host will send residual difference spatial-domain blocks of data for non-intra macroblocks using 16-bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 0, ConfigSpatialResid8 must be 0.
For intra pictures, spatial-domain blocks must be sent using 8-bit samples if bits-per-pixel (BPP) is 8, and using 16-bit samples if BPP > 8. If ConfigIntraResidUnsigned is 0, these samples are sent as signed integer values relative to a constant reference value of 2^(BPP−1), and if ConfigIntraResidUnsigned is 1, these samples are sent as unsigned integer values relative to a constant reference value of 0.
If the value is 1, 8-bit difference overflow blocks are subtracted rather than added. The value must be 0 unless ConfigSpatialResid8 is 1.
The ability to subtract differences rather than add them enables 8-bit difference decoding to be fully compliant with the full ±255 range of values required in video decoder specifications, because +255 cannot be represented as the addition of two signed 8-bit numbers, but any number in the range ±255 can be represented as the difference between two signed 8-bit numbers (+255 = +127 − (−128)).
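The arithmetic behind this claim is easy to check: addition of two signed 8-bit values tops out at +254, while subtraction spans the full ±255 range. A small sketch (names hypothetical):

```cpp
#include <cstdint>

// Decompose a residual value in [-255, +255] into two signed 8-bit values
// a and b with a - b == value. Addition alone cannot reach +255 (the largest
// sum is 127 + 127 == 254), but subtraction can: +255 == +127 - (-128).
bool DecomposeResidual(int value, int8_t* a, int8_t* b) {
    if (value < -255 || value > 255) return false;
    // Clamp the first operand to the signed 8-bit range...
    int hi = value > 127 ? 127 : (value < -128 ? -128 : value);
    *a = static_cast<int8_t>(hi);
    // ...then the difference hi - value is guaranteed to fit in [-128, 127].
    *b = static_cast<int8_t>(hi - value);
    return true;
}
```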
If the value is 1, spatial-domain blocks for intra macroblocks must be clipped to an 8-bit range on the host and spatial-domain blocks for non-intra macroblocks must be clipped to a 9-bit range on the host. If the value is 0, no such clipping is necessary by the host.
The value must be 0 unless ConfigSpatialResid8 is 0 and ConfigResidDiffHost is 1.
If the value is 1, any spatial-domain residual difference data must be sent in a chrominance-interleaved form matching the YUV format chrominance interleaving pattern. The value must be 0 unless ConfigResidDiffHost is 1 and the YUV format is NV12 or NV21.
Indicates the method of representation of spatial-domain blocks of residual difference data for intra blocks when using host-based difference decoding.
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 0, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 1, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
The value of the member must be 0 unless ConfigResidDiffHost is 1.
If the value is 1, transform-domain blocks of coefficient data may be sent from the host for accelerator-based IDCT. If the value is 0, accelerator-based IDCT will not be used. If both ConfigResidDiffHost and ConfigResidDiffAccelerator are 1, this indicates that some residual difference decoding will be done on the host and some on the accelerator, as indicated by macroblock-level control commands.
The value must be 0 if ConfigBitstreamRaw is 1.
If the value is 1, the inverse scan for transform-domain block processing will be performed on the host, and absolute indices will be sent instead for any transform coefficients. If the value is 0, the inverse scan will be performed on the accelerator.
The value must be 0 if ConfigResidDiffAccelerator is 0 or if Config4GroupedCoefs is 1.
If the value is 1, the IDCT specified in Annex W of ITU-T Recommendation H.263 is used. If the value is 0, any compliant IDCT can be used for off-host IDCT.
The H.263 annex does not comply with the IDCT requirements of MPEG-2 corrigendum 2, so the value must not be 1 for use with MPEG-2 video.
The value must be 0 if ConfigResidDiffAccelerator is 0, indicating purely host-based residual difference decoding.
If the value is 1, transform coefficients for off-host IDCT will be sent using the DXVA_TCoef4Group structure. If the value is 0, the DXVA_TCoefSingle structure is used. The value must be 0 if ConfigResidDiffAccelerator is 0 or if ConfigHostInverseScan is 1.
Specifies how many frames the decoder device processes at any one time.
Contains decoder-specific configuration information.
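A few of the cross-flag constraints above can be spot-checked mechanically. This sketch is hypothetical (the struct and function are not part of the API) and covers only three of the documented rules:

```cpp
// Hypothetical mirror of a subset of the decoder configuration flags.
struct DecoderConfig {
    int bitstreamRaw;          // ConfigBitstreamRaw
    int residDiffHost;         // ConfigResidDiffHost
    int residDiffAccelerator;  // ConfigResidDiffAccelerator
    int spatialResid8;         // ConfigSpatialResid8
    int spatialResidInterleaved;
    int hostInverseScan;       // ConfigHostInverseScan
};

// Returns false if the configuration violates one of three documented rules:
// raw bitstream data excludes host/accelerator residual decoding; 8-bit
// spatial residuals require host residual decoding; host inverse scan
// requires accelerator-based IDCT.
bool CheckConfig(const DecoderConfig& c) {
    if (c.bitstreamRaw && (c.residDiffHost || c.residDiffAccelerator)) return false;
    if (c.spatialResid8 && !c.residDiffHost) return false;
    if (c.hostInverseScan && !c.residDiffAccelerator) return false;
    return true;
}
```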
Describes a video stream for a Microsoft Direct3D 11 video decoder or video processor.
-The decoding profile. To get the list of profiles supported by the device, call the
The width of the video frame, in pixels.
The height of the video frame, in pixels.
The output surface format, specified as a
Contains driver-specific data for the
The exact meaning of each structure member depends on the value of Function.
-Describes a video decoder output view.
-The decoding profile. To get the list of profiles supported by the device, call the
The resource type of the view, specified as a member of the
A
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Describes a sub sample mapping block.
-Values in the sub sample mapping blocks are relative to the start of the decode buffer.
-The number of clear (non-encrypted) bytes at the start of the block.
The number of encrypted bytes following the clear bytes.
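Given an array of such blocks, totals are simple sums. A sketch with hypothetical type and function names mirroring the documented members:

```cpp
#include <cstdint>
#include <vector>

// Each mapping block is a run of clear (non-encrypted) bytes followed by a
// run of encrypted bytes, with positions relative to the decode buffer start.
struct SubSampleMappingBlock {
    uint32_t clearSize;      // non-encrypted bytes at the start of the block
    uint32_t encryptedSize;  // encrypted bytes following the clear bytes
};

// Total number of encrypted bytes described by the block array.
uint32_t TotalEncryptedBytes(const std::vector<SubSampleMappingBlock>& blocks) {
    uint32_t total = 0;
    for (const auto& b : blocks) total += b.encryptedSize;
    return total;
}
```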
Describes the capabilities of a Microsoft Direct3D 11 video processor.
-The video processor stores state information for each input stream. These states persist between blits. With each blit, the application selects which streams to enable or disable. Disabling a stream does not affect the state information for that stream.
The MaxStreamStates member gives the maximum number of stream states that can be saved. The MaxInputStreams member gives the maximum number of streams that can be enabled during a blit. These two values can differ.
-A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the D3D11_VIDEO_PROCESSOR_FILTER_CAPS enumeration.
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The number of frame-rate conversion capabilities. To enumerate the frame-rate conversion capabilities, call the
The maximum number of input streams that can be enabled at the same time.
The maximum number of input streams for which the device can store state data.
Specifies the color space for video processing.
-The RGB_Range member applies to RGB output, while the YCbCr_Matrix and YCbCr_xvYCC members apply to YCbCr output. If the driver performs color-space conversion on the background color, it uses the values that apply to both color spaces.
If the driver supports extended YCbCr (xvYCC), it returns the
If extended YCbCr is supported, it can be used with either transfer matrix. Extended YCbCr does not change the black point or white point: the black point is still 16 and the white point is still 235. However, extended YCbCr explicitly allows blacker-than-black values in the range 1–15, and whiter-than-white values in the range 236–254. When extended YCbCr is used, the driver should not clip the luma values to the nominal 16–235 range.
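The clipping behavior can be sketched as follows (function name hypothetical): conventional YCbCr clips luma to the nominal range, while xvYCC passes the extended codes through:

```cpp
#include <cstdint>

// Clip an 8-bit luma sample for conventional YCbCr (nominal 16-235 range).
// When extended YCbCr (xvYCC) is in use, values pass through unclipped so
// blacker-than-black (1-15) and whiter-than-white (236-254) codes survive.
uint8_t ClipLuma(uint8_t y, bool extendedYCbCr) {
    if (extendedYCbCr) return y;  // xvYCC: no clipping
    if (y < 16) return 16;
    if (y > 235) return 235;
    return y;
}
```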
-Specifies whether the output is intended for playback or video processing (such as editing or authoring). The device can optimize the processing based on the type. The default state value is 0 (playback).
| Value | Meaning |
|---|---|
| 0 | Playback |
| 1 | Video processing |
Specifies the RGB color range. The default state value is 0 (full range).
| Value | Meaning |
|---|---|
| 0 | Full range (0-255) |
| 1 | Limited range (16-235) |
Specifies the YCbCr transfer matrix. The default state value is 0 (BT.601).
| Value | Meaning |
|---|---|
| 0 | ITU-R BT.601 |
| 1 | ITU-R BT.709 |
Specifies whether the output uses conventional YCbCr or extended YCbCr (xvYCC). The default state value is zero (conventional YCbCr).
| Value | Meaning |
|---|---|
| 0 | Conventional YCbCr |
| 1 | Extended YCbCr (xvYCC) |
Specifies the
Introduced in Windows 8.1.
Reserved. Set to zero.
Describes a video stream for a video processor.
-A member of the
The frame rate of the input video stream, specified as a
The width of the input frames, in pixels.
The height of the input frames, in pixels.
The frame rate of the output video stream, specified as a
The width of the output frames, in pixels.
The height of the output frames, in pixels.
A member of the
Specifies a custom rate for frame-rate conversion or inverse telecine (IVTC).
-The CustomRate member gives the rate conversion factor, while the remaining members define the pattern of input and output samples.
-The ratio of the output frame rate to the input frame rate, expressed as a
The number of output frames that will be generated for every N input samples, where N = InputFramesOrFields.
If TRUE, the input stream must be interlaced. Otherwise, the input stream must be progressive.
The number of input fields or frames for every N output frames that will be generated, where N = OutputFrames.
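For example, 3:2 inverse telecine turns every 5 interlaced input fields into 2 progressive output frames. A hypothetical mirror of the documented members:

```cpp
#include <cstdint>

// Hypothetical mirror of the custom-rate members: OutputFrames output frames
// are generated for every InputFramesOrFields input samples (fields when the
// input is interlaced, frames otherwise).
struct CustomRate {
    uint32_t outputFrames;
    uint32_t inputFramesOrFields;
    bool inputInterlaced;
};

// Output frames generated for a given input sample count (assumes the count
// is a whole number of repetitions of the pattern).
uint32_t OutputForInput(const CustomRate& r, uint32_t inputSamples) {
    return inputSamples / r.inputFramesOrFields * r.outputFrames;
}
```

With 3:2 inverse telecine (2 frames out per 5 fields in), 60 fields per second become 24 frames per second.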
Defines the range of supported values for an image filter.
-The multiplier enables the filter range to have a fractional step value.
For example, a hue filter might have an actual range of [−180.0 ... +180.0] with a step size of 0.25. The device would report the following range and multiplier:
In this case, a filter value of 2 would be interpreted by the device as 0.50 (or 2 × 0.25).
The device should use a multiplier that can be represented exactly as a base-2 fraction.
-The minimum value of the filter.
The maximum value of the filter.
The default value of the filter.
A multiplier. Use the following formula to translate the filter setting into the actual filter value: Actual Value = Set Value × Multiplier.
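Putting the multiplier rule together with the hue example above (struct and function names are hypothetical):

```cpp
// Hypothetical mirror of the documented filter-range members. For a hue
// filter with an actual range of [-180.0, +180.0] and step 0.25, the device
// would report min -720, max +720, multiplier 0.25.
struct FilterRange {
    int   minimum;
    int   maximum;
    int   defaultValue;
    float multiplier;  // should be exactly representable as a base-2 fraction
};

// Actual Value = Set Value * Multiplier.
float ActualFilterValue(const FilterRange& r, int setValue) {
    return setValue * r.multiplier;
}
```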
Describes a video processor input view.
-The surface format. If zero, the driver uses the DXGI format that was used to create the resource. If you are using feature level 9, the value must be zero.
The resource type of the view, specified as a member of the
A
Describes a video processor output view.
-The resource type of the view, specified as a member of the
A
Use this member of the union when ViewDimension equals
A
Use this member of the union when ViewDimension equals
Defines a group of video processor capabilities that are associated with frame-rate conversion, including deinterlacing and inverse telecine.
-The number of past reference frames required to perform the optimal video processing.
The number of future reference frames required to perform the optimal video processing.
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The number of custom frame rates that the driver supports. To get the list of custom frame rates, call the
Contains stream-level data for the
If the stereo 3D format is
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides information about the input streams passed into the ID3D11VideoContext1::VideoProcessorGetBehaviorHints method.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Describes a video sample.
-The width of the video sample.
The height of the video sample.
The format of the video sample.
The colorspace of the sample.
Enables the application to defer the creation of an object. This interface is exposed by activation objects.
-Typically, the application calls some function that returns an
The class identifier that is associated with the activatable runtime class.
An optional friendly name for the activation object. The friendly name is stored in the object's
To create the Windows Runtime object, call
Creates the object associated with this activation object.
-Interface identifier (IID) of the requested interface.
A reference to the requested interface. The caller must release the interface.
Some Microsoft Media Foundation objects must be shut down before being released. If so, the caller is responsible for shutting down the object that is returned in ppv. To shut down the object, do one of the following:
The
After the first call to ActivateObject, subsequent calls return a reference to the same instance, until the client calls either ShutdownObject or
Creates the object associated with this activation object. Riid is provided via reflection on the COM object type
-A reference to the requested interface. The caller must release the interface.
Some Microsoft Media Foundation objects must be shut down before being released. If so, the caller is responsible for shutting down the object that is returned in ppv. To shut down the object, do one of the following:
The
After the first call to ActivateObject, subsequent calls return a reference to the same instance, until the client calls either ShutdownObject or
Creates the object associated with this activation object.
-Interface identifier (IID) of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
Some Microsoft Media Foundation objects must be shut down before being released. If so, the caller is responsible for shutting down the object that is returned in ppv. To shut down the object, do one of the following:
The
After the first call to ActivateObject, subsequent calls return a reference to the same instance, until the client calls either ShutdownObject or
Shuts down the created object.
-The method returns an
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If you create an object by calling
The component that calls ActivateObject, not the component that creates the activation object, is responsible for calling ShutdownObject. For example, in a typical playback application, the application creates activation objects for the media sinks, but the Media Session calls ActivateObject. Therefore the Media Session, not the application, calls ShutdownObject.
After ShutdownObject is called, the activation object releases all of its internal references to the created object. If you call ActivateObject again, the activation object will create a new instance of the other object.
Detaches the created object from the activation object.
-The method returns an
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| E_NOTIMPL | Not implemented. |
The activation object releases all of its internal references to the created object. If you call ActivateObject again, the activation object will create a new instance of the other object.
The DetachObject method does not shut down the created object. If the DetachObject method succeeds, the client must shut down the created object. This rule applies only to objects that have a shutdown method or that support the
Implementation of this method is optional. If the activation object does not support this method, the method returns E_NOTIMPL.
-Provides information about the result of an asynchronous operation.
-Use this interface to complete an asynchronous operation. You get a reference to this interface when your callback object's
If you are implementing an asynchronous method, call
Any custom implementation of this interface must inherit the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
The caller of the asynchronous method specifies the state object, and can use it for any caller-defined purpose. The state object can be
If you are implementing an asynchronous method, set the state object through the punkState parameter of the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets or sets the status of the asynchronous operation.
-The method returns an
| Return code | Description |
|---|---|
| S_OK | The operation completed successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Returns an object associated with the asynchronous operation. The type of object, if any, depends on the asynchronous method that was called.
-Receives a reference to the object's
Typically, this object is used by the component that implements the asynchronous method. It provides a way for the function that invokes the callback to pass information to the asynchronous End... method that completes the operation.
If you are implementing an asynchronous method, you can set the object through the punkObject parameter of the
If the asynchronous result object's internal
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns the state object specified by the caller in the asynchronous Begin method.
-Receives a reference to the state object's
The method returns an
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | There is no state object associated with this asynchronous result. |
The caller of the asynchronous method specifies the state object, and can use it for any caller-defined purpose. The state object can be
If you are implementing an asynchronous method, set the state object through the punkState parameter of the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns the status of the asynchronous operation.
-The method returns an
| Return code | Description |
|---|---|
| S_OK | The operation completed successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the status of the asynchronous operation.
-The status of the asynchronous operation.
The method returns an
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If you implement an asynchronous method, call SetStatus to set the status code for the operation.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns an object associated with the asynchronous operation. The type of object, if any, depends on the asynchronous method that was called.
-Receives a reference to the object's
The method returns an
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | There is no object associated with this asynchronous result. |
Typically, this object is used by the component that implements the asynchronous method. It provides a way for the function that invokes the callback to pass information to the asynchronous End... method that completes the operation.
If you are implementing an asynchronous method, you can set the object through the punkObject parameter of the
If the asynchronous result object's internal
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns the state object specified by the caller in the asynchronous Begin method, without incrementing the object's reference count.
-Returns a reference to the state object's
This method cannot be called remotely.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The
The following functions return
A byte stream for a media source can be opened with read access. A byte stream for an archive media sink should be opened with both read and write access. (Read access may be required, because the archive sink might need to read portions of the file as it writes.)
Some implementations of this interface also expose one or more of the following interfaces:
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the characteristics of the byte stream.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the length of the stream.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the current read or write position in the stream.
-The methods that update the current position are Read, BeginRead, Write, BeginWrite, SetCurrentPosition, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Queries whether the current position has reached the end of the stream.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Reads data from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
This method reads at most cb bytes from the current position in the stream and copies them into the buffer provided by the caller. The number of bytes that were read is returned in the pcbRead parameter. The method does not return an error code on reaching the end of the file, so the application should check the value in pcbRead after the method returns.
This method is synchronous. It blocks until the read operation completes.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
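Because reaching end-of-stream is not an error, a caller drains the stream by looping until the reported byte count is zero. A sketch with a hypothetical stand-in for the Read call:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Drain a byte stream whose read call succeeds at end-of-stream and reports
// progress only through the bytes-read count. "read" here is a hypothetical
// stand-in for the Read method: it fills the buffer and returns how many
// bytes it actually read. A return of 0 signals end of stream, not an error.
std::vector<uint8_t> ReadAll(
    const std::function<size_t(uint8_t* buf, size_t cb)>& read) {
    std::vector<uint8_t> out;
    uint8_t chunk[4096];
    for (;;) {
        size_t got = read(chunk, sizeof(chunk));
        if (got == 0) break;  // end of stream
        out.insert(out.end(), chunk, chunk + got);
    }
    return out;
}
```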
Applies to: desktop apps | Metro style apps
Begins an asynchronous read operation from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been read into the buffer, the callback object's
Do not read from, write to, free, or reallocate the buffer while an asynchronous read is pending.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Completes an asynchronous read operation.
- Pointer to the
Call this method after the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Writes data to the stream.
-Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
This method writes the contents of the pb buffer to the stream, starting at the current stream position. The number of bytes that were written is returned in the pcbWritten parameter.
This method is synchronous. It blocks until the write operation completes.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Begins an asynchronous write operation to the stream.
-Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been written to the stream, the callback object's
Do not reallocate, free, or write to the buffer while an asynchronous write is still pending.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Completes an asynchronous write operation.
-Pointer to the
Call this method when the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Moves the current position in the stream by a specified offset.
- Specifies the origin of the seek as a member of the
Specifies the new position, as a byte offset from the seek origin.
Specifies zero or more flags. The following flags are defined.
Value | Meaning |
---|---|
| All pending I/O requests are canceled after the seek request completes successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
-If this method succeeds, it returns
If the byte stream is read-only, this method has no effect.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
-If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the characteristics of the byte stream.
-Receives a bitwise OR of zero or more flags. The following flags are defined.
Value | Meaning |
---|---|
| The byte stream can be read. |
| The byte stream can be written to. |
| The byte stream can be seeked. |
| The byte stream is from a remote source, such as a network. |
| The byte stream represents a file directory. |
| Seeking within this stream might be slow. For example, the byte stream might download from a network. |
| The byte stream is currently downloading data to a local cache. Read operations on the byte stream might take longer until the data is completely downloaded. This flag is cleared after all of the data has been downloaded. If the MFBYTESTREAM_HAS_SLOW_SEEK flag is also set, it means the byte stream must download the entire file sequentially. Otherwise, the byte stream can respond to seek requests by restarting the download from a new point in the stream. |
| Another thread or process can open this byte stream for writing. If this flag is present, the length of the byte stream could change while it is being read. This flag can affect the behavior of byte-stream handlers. For more information, see |
| The byte stream is not currently using the network to receive the content. Networking hardware may enter a power saving state when this bit is set. Note: Requires Windows 8 or later. |
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
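Because GetCapabilities returns a bitwise OR of flags, callers should test individual bits rather than compare the whole value. The sketch below uses illustrative bit values; the real constants are the MFBYTESTREAM_* flags, whose exact numeric values are not reproduced here.

```cpp
#include <cassert>

// Illustrative capability bits (hypothetical values, standing in for the
// MFBYTESTREAM_IS_READABLE / _IS_WRITABLE / _IS_SEEKABLE / _IS_REMOTE flags).
enum Caps : unsigned {
    kReadable = 0x1,
    kWritable = 0x2,
    kSeekable = 0x4,
    kRemote   = 0x8,
};

// The capabilities value is a bitmask: test bits with &, never with ==.
bool CanSeek(unsigned caps) { return (caps & kSeekable) != 0; }

bool IsReadOnly(unsigned caps) {
    return (caps & kReadable) != 0 && (caps & kWritable) == 0;
}
```

A stream can report several capabilities at once (for example, readable and seekable but not writable), which is why the flags are combined with bitwise OR.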
Retrieves the length of the stream.
-Receives the length of the stream, in bytes. If the length is unknown, this value is -1.
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the length of the stream.
-Length of the stream in bytes.
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the current read or write position in the stream.
-Receives the current position, in bytes.
If this method succeeds, it returns
The methods that update the current position are Read, BeginRead, Write, BeginWrite, SetCurrentPosition, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the current read or write position.
-New position in the stream, as a byte offset from the start of the stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
If the new position is larger than the length of the stream, the method returns E_INVALIDARG.
Implementation notes: This method should update the current position in the stream by setting the current position to the value passed in to the qwPosition parameter. Other methods that can update the current position are Read, BeginRead, Write, BeginWrite, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
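The validation rule above (a position larger than the stream length yields E_INVALIDARG) can be sketched portably. The struct and error codes below are hypothetical stand-ins for illustration, not the real COM interface.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical error codes mirroring the HRESULT convention.
constexpr long kOk = 0;          // stands in for S_OK
constexpr long kInvalidArg = -1; // stands in for E_INVALIDARG

// SetCurrentPosition contract sketch: reject positions past the end of
// the stream; otherwise record the new position.
struct StreamPosition {
    uint64_t length;
    uint64_t current = 0;

    long SetCurrentPosition(uint64_t qwPosition) {
        if (qwPosition > length)
            return kInvalidArg;  // new position larger than stream length
        current = qwPosition;
        return kOk;
    }
};
```

Note that a position exactly equal to the length is accepted; it simply places the stream at end-of-stream, from which a read returns zero bytes.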
Queries whether the current position has reached the end of the stream.
- Receives the value TRUE if the end of the stream has been reached, or
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Reads data from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Receives the number of bytes that are copied into the buffer. This parameter cannot be
If this method succeeds, it returns
This method reads at most cb bytes from the current position in the stream and copies them into the buffer provided by the caller. The number of bytes that were read is returned in the pcbRead parameter. The method does not return an error code on reaching the end of the file, so the application should check the value in pcbRead after the method returns.
This method is synchronous. It blocks until the read operation completes.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that were read, which is specified by the value returned in the pcbRead parameter, to the current position. Other methods that can update the current position are Read, Write, BeginWrite, Seek, and SetCurrentPosition.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
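The end-of-stream behavior described above (the method succeeds even at end of file, so the caller must inspect pcbRead) leads to a characteristic read loop. The in-memory stream below is a hypothetical, portable stand-in that mimics the documented Read semantics; it is not the real COM interface.

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <vector>

// Stand-in for IMFByteStream::Read semantics: copies at most cb bytes from
// the current position, reports the count in *pcbRead, and returns a
// success code even at end of stream -- 0 bytes read signals EOF.
struct MemoryByteStream {
    std::vector<unsigned char> data;
    size_t pos = 0;

    long Read(unsigned char* pb, unsigned long cb, unsigned long* pcbRead) {
        unsigned long avail = static_cast<unsigned long>(data.size() - pos);
        unsigned long n = std::min(cb, avail);
        std::memcpy(pb, data.data() + pos, n);
        pos += n;      // implementation note: advance by the bytes read
        *pcbRead = n;  // zero here means end of stream, not an error
        return 0;      // success code in all cases
    }
};

// Drains the stream in fixed-size chunks, stopping when *pcbRead is 0.
std::vector<unsigned char> ReadAll(MemoryByteStream& s) {
    std::vector<unsigned char> out;
    unsigned char buf[4];
    for (;;) {
        unsigned long got = 0;
        if (s.Read(buf, sizeof buf, &got) != 0 || got == 0)
            break;  // error, or end of stream reached
        out.insert(out.end(), buf, buf + got);
    }
    return out;
}
```

The key point is the loop's exit condition: a successful call with zero bytes read, not a failure code, is what marks the end of the stream.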
Begins an asynchronous read operation from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been read into the buffer, the callback object's
Do not read from, write to, free, or reallocate the buffer while an asynchronous read is pending.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that will be read, which is specified by the value returned in the pcbRead parameter, to the current position. Other methods that can update the current position are BeginRead, Write, BeginWrite, Seek, and SetCurrentPosition.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Completes an asynchronous read operation.
- Pointer to the
Receives the number of bytes that were read.
If this method succeeds, it returns
Call this method after the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Writes data to the stream.
-Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
Receives the number of bytes that are written.
If this method succeeds, it returns
This method writes the contents of the pb buffer to the stream, starting at the current stream position. The number of bytes that were written is returned in the pcbWritten parameter.
This method is synchronous. It blocks until the write operation completes.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that were written to the stream, which is specified by the value returned in the pcbWritten parameter, to the current position.
Other methods that can update the current position are Read, BeginRead, BeginWrite, Seek, and SetCurrentPosition.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Begins an asynchronous write operation to the stream.
-Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been written to the stream, the callback object's
Do not reallocate, free, or write to the buffer while an asynchronous write is still pending.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that will be written to the stream, which is specified by the value returned in the pcbWritten parameter, to the current position. Other methods that can update the current position are Read, BeginRead, Write, Seek, and SetCurrentPosition.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Completes an asynchronous write operation.
-Pointer to the
Receives the number of bytes that were written.
If this method succeeds, it returns
Call this method when the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Moves the current position in the stream by a specified offset.
- Specifies the origin of the seek as a member of the
Specifies the new position, as a byte offset from the seek origin.
Specifies zero or more flags. The following flags are defined.
Value | Meaning |
---|---|
| All pending I/O requests are canceled after the seek request completes successfully. |
Receives the new position after the seek.
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Implementation notes: This method should update the current position in the stream by adding qwSeekOffset to the seek origin position. This should be the same value passed back in the pqwCurrentPosition parameter. Other methods that can update the current position are Read, BeginRead, Write, BeginWrite, and SetCurrentPosition.
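The position arithmetic the implementation notes describe is simple: add the signed byte offset to the chosen origin. The sketch below is illustrative; the enum names are hypothetical stand-ins for the real seek-origin constants (seek from the start of the stream, or from the current position).

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical seek origins (stand-ins for the real enumeration values).
enum class SeekOrigin { Begin, Current };

// Computes the new position per the implementation notes: the signed
// offset is added to the origin position. The real method would also
// report this value through the pqwCurrentPosition parameter.
uint64_t SeekPosition(uint64_t current, SeekOrigin origin, int64_t offset) {
    uint64_t base = (origin == SeekOrigin::Begin) ? 0 : current;
    // Unsigned wraparound makes "base + offset" correct for negative
    // offsets as well, provided the result stays non-negative.
    return base + static_cast<uint64_t>(offset);
}
```

A production implementation would additionally validate the result (for example, rejecting a seek that lands before the start of the stream), which this sketch omits.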
-Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
-If this method succeeds, it returns
If the byte stream is read-only, this method has no effect.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
-If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Controls one or more capture devices. The capture engine implements this interface. To get a reference to this interface, call either MFCreateCaptureEngine or
Creates an instance of the capture engine.
-The CLSID of the object to create. Currently, this parameter must equal
The IID of the requested interface. The capture engine supports the
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
Before calling this method, call the
Initializes the capture engine.
-A reference to the
A reference to the
You can use this parameter to configure the capture engine. Call
An
If you set the
Otherwise, if pAudioSource is
To override the default audio device, set pAudioSource to an
An
If you set the
Otherwise, if pVideoSource is
To override the default video device, set pVideoSource to an
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The Initialize method was already called. |
| No capture devices are available. |
You must call this method once before using the capture engine. Calling the method a second time returns
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_INITIALIZED event through the
Gets a reference to the capture source object. Use the capture source to configure the capture devices.
-Initializes the capture engine.
-A reference to the
A reference to the
You can use this parameter to configure the capture engine. Call
An
If you set the
Otherwise, if pAudioSource is
To override the default audio device, set pAudioSource to an
An
If you set the
Otherwise, if pVideoSource is
To override the default video device, set pVideoSource to an
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The Initialize method was already called. |
| No capture devices are available. |
You must call this method once before using the capture engine. Calling the method a second time returns
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_INITIALIZED event through the
Starts preview.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The preview sink was not initialized. |
Before calling this method, configure the preview sink by calling
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_PREVIEW_STARTED event through the
After the preview sink is configured, you can stop and start preview by calling
Stops preview.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The capture engine is not currently previewing. |
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_PREVIEW_STOPPED event through the
Starts recording audio and/or video to a file.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The recording sink was not initialized. |
Before calling this method, configure the recording sink by calling
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_RECORD_STARTED event through the
To stop recording, call
Stops recording.
-A Boolean value that specifies whether to finalize the output file. To create a valid output file, specify TRUE. Specify
A Boolean value that specifies if the unprocessed samples waiting to be encoded should be flushed.
If this method succeeds, it returns
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_RECORD_STOPPED event through the
Captures a still image from the video stream.
-If this method succeeds, it returns
Before calling this method, configure the photo sink by calling
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_PHOTO_TAKEN event through the
Gets a reference to one of the capture sink objects. You can use the capture sinks to configure preview, recording, or still-image capture.
-An
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid argument. |
Gets a reference to the capture source object. Use the capture source to configure the capture devices.
-Receives a reference to the
If this method succeeds, it returns
Creates an instance of the capture engine.
-To get a reference to this interface, call the CoCreateInstance function and specify the CLSID equal to
Calling the MFCreateCaptureEngine function is equivalent to calling
Creates an instance of the capture engine.
-The CLSID of the object to create. Currently, this parameter must equal
The IID of the requested interface. The capture engine supports the
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
Before calling this method, call the
Callback interface for receiving events from the capture engine.
-To set the callback interface on the capture engine, call the
Callback interface to receive data from the capture engine.
-To set the callback interface, call one of the following methods.
Extensions for the
Controls the photo sink. The photo sink captures still images from the video stream.
-The photo sink can deliver samples to one of the following destinations:
The application must specify a single destination. Multiple destinations are not supported.
To capture an image, call
Specifies a byte stream that will receive the still image data.
-A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a callback to receive the still-image data.
-A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies the name of the output file for the still image.
-Calling this method overrides any previous call to
Specifies the name of the output file for the still image.
-A null-terminated string that contains the URL of the output file.
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a callback to receive the still-image data.
-A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies a byte stream that will receive the still image data.
-A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Controls the preview sink. The preview sink enables the application to preview audio and video from the camera.
-To start preview, call
Sets a callback to receive the preview data for one stream.
-The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies a window for preview.
-Calling this method overrides any previous call to
Specifies a Microsoft DirectComposition visual for preview.
-Gets or sets the current mirroring state of the video preview stream.
-Sets a custom media sink for preview.
-This method overrides the default selection of the media sink for preview.
-Specifies a window for preview.
-A handle to the window. The preview sink draws the video frames inside this window.
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies a Microsoft DirectComposition visual for preview.
-A reference to a DirectComposition visual that implements the
If this method succeeds, it returns
Updates the video frame. Call this method when the preview window receives a WM_PAINT or WM_SIZE message.
-If this method succeeds, it returns
Sets a callback to receive the preview data for one stream.
-The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Gets the current mirroring state of the video preview stream.
-Receives the value TRUE if mirroring is enabled, or
If this method succeeds, it returns
Enables or disables mirroring of the video preview stream.
-If TRUE, mirroring is enabled. If
If this method succeeds, it returns
Gets the rotation of the video preview stream.
-The zero-based index of the stream. You must specify a video stream.
Receives the image rotation, in degrees.
If this method succeeds, it returns
Rotates the video preview stream.
-The zero-based index of the stream to rotate. You must specify a video stream.
The amount to rotate the video, in degrees. Valid values are 0, 90, 180, and 270. The value zero restores the video to its original orientation.
If this method succeeds, it returns
Sets a custom media sink for preview.
-A reference to the
If this method succeeds, it returns
This method overrides the default selection of the media sink for preview.
-Controls the recording sink. The recording sink creates compressed audio/video files or compressed audio/video streams.
-The recording sink can deliver samples to one of the following destinations:
The application must specify a single destination. Multiple destinations are not supported. (However, if a callback is used, you can provide a separate callback for each stream.)
If the destination is a byte stream or an output file, the application specifies a container type, such as MP4 or ASF. The capture engine then multiplexes the audio and video to produce the format defined by the container type. If the destination is a callback interface, however, the capture engine does not multiplex or otherwise interleave the samples. The callback option gives you the most control over the recorded output, but requires more work by the application.
To start the recording, call
Specifies a byte stream that will receive the data for the recording.
-A reference to the
A
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a callback to receive the recording data for one stream.
-The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies the name of the output file for the recording.
-The capture engine uses the file name extension to select the container type for the output file. For example, if the file name extension is ".mp4", the capture engine creates an MP4 file.
Calling this method overrides any previous call to
Sets a custom media sink for recording.
-This method overrides the default selection of the media sink for recording.
-Specifies a byte stream that will receive the data for the recording.
-A reference to the
A
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies the name of the output file for the recording.
-A null-terminated string that contains the URL of the output file.
If this method succeeds, it returns
The capture engine uses the file name extension to select the container type for the output file. For example, if the file name extension is ".mp4", the capture engine creates an MP4 file.
Calling this method overrides any previous call to
Sets a callback to receive the recording data for one stream.
-The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a custom media sink for recording.
-A reference to the
If this method succeeds, it returns
This method overrides the default selection of the media sink for recording.
-Gets the rotation that is currently being applied to the recorded video stream.
-The zero-based index of the stream. You must specify a video stream.
Receives the image rotation, in degrees.
If this method succeeds, it returns
Rotates the recorded video stream.
-The zero-based index of the stream to rotate. You must specify a video stream.
The amount to rotate the video, in degrees. Valid values are 0, 90, 180, and 270. The value zero restores the video to its original orientation.
If this method succeeds, it returns
Controls a capture sink, which is an object that receives one or more streams from a capture device.
-The capture engine creates the following capture sinks.
To get a reference to a capture sink, call
Sink | Interface |
---|---|
Photo sink | |
Preview sink | |
Recording sink | |
Applications cannot directly create the capture sinks.
If an image stream native media type is set to JPEG, the photo sink should be configured with a format identical to native source format. JPEG native type is passthrough only.
If an image stream native type is set to JPEG, to add an effect, change the native type on the image stream to an uncompressed video media type (such as NV12 or RGB32) and then add the effect.
If the native type is H.264 for the record stream, the record sink should be configured with the same media type. H.264 native type is passthrough only and cannot be decoded.
Record streams that expose H.264 do not expose any other type. H.264 record streams cannot be used in conjunction with effects. To add effects, instead connect the preview stream to the record sink using AddStream.
-Queries the underlying Sink Writer object for an interface.
-Gets the output format for a stream on this capture sink.
-The zero-based index of the stream to query. The index is returned in the pdwSinkStreamIndex parameter of the
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSinkStreamIndex parameter is invalid. |
Queries the underlying Sink Writer object for an interface.
-Connects a stream from the capture source to this capture sink.
-The source stream to connect. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
An
A reference to the
Receives the index of the new stream on the capture sink. Note that this index will not necessarily match the value of dwSourceStreamIndex.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The format specified in pMediaType is not valid for this capture sink. |
| The dwSourceStreamIndex parameter is invalid, or the specified source stream was already connected to this sink. |
Prepares the capture sink by loading any required pipeline components, such as encoders, video processors, and media sinks.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. |
Calling this method is optional. This method gives the application an opportunity to configure the pipeline components before they are used. The method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_SINK_PREPARED event through the
Before calling this method, configure the capture sink by adding at least one stream. To add a stream, call
The Prepare method fails if the capture sink is currently in use. For example, calling Prepare on the preview sink fails if the capture engine is currently previewing.
-Removes all streams from the capture sink.
-If this method succeeds, it returns
You can use this method to reconfigure the sink.
-Receives state-change notifications from the presentation clock.
-To receive state-change notifications from the presentation clock, implement this interface and call
This interface must be implemented by:
Presentation time sources. The presentation clock uses this interface to request state changes from the time source.
Media sinks. Media sinks use this interface to get notifications when the presentation clock changes.
Other objects that need to be notified can implement this interface.
-Applies to: desktop apps only
Enables two threads to share the same Direct3D 9 device, and provides access to the DirectX Video Acceleration (DXVA) features of the device.
-This interface is exposed by the Direct3D Device Manager. To create the Direct3D device manager, call
To get this interface from the Enhanced Video Renderer (EVR), call
The Direct3D Device Manager supports Direct3D 9 devices only. It does not support DXGI devices.
-Enables two threads to share the same Direct3D 9 device, and provides access to the DirectX Video Acceleration (DXVA) features of the device.
-This interface is exposed by the Direct3D Device Manager. To create the Direct3D device manager, call
To get this interface from the Enhanced Video Renderer (EVR), call
The Direct3D Device Manager supports Direct3D 9 devices only. It does not support DXGI devices.
Windows Store apps must use IMFDXGIDeviceManager and Direct3D 11 Video APIs.
-Applies to: desktop apps only
Creates an instance of the Direct3D Device Manager.
-If this function succeeds, it returns
Sets the Direct3D device or notifies the device manager that the Direct3D device was reset.
-Pointer to the
Token received in the pResetToken parameter of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid token. |
| Direct3D device error. |
When you first create the Direct3D device manager, call this method with a reference to the Direct3D device. The device manager does not create the device; the caller must provide the device reference initially.
Also call this method if the Direct3D device becomes lost and you need to reset the device or create a new device. This occurs if
The resetToken parameter ensures that only the component which originally created the device manager can invalidate the current device.
If this method succeeds, all open device handles become invalid.
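The reset-token rule above (only the component holding the token issued at creation can replace the device, and doing so invalidates open handles) can be sketched portably. The struct below is a hypothetical illustration, not the real device manager implementation.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical device-manager state: the token handed out at creation
// gates ResetDevice, and a generation counter models handle invalidation.
struct DeviceManager {
    uint32_t resetToken;     // issued when the manager was created
    uint32_t generation = 0; // bumped on reset; stale handles compare unequal

    long ResetDevice(uint32_t token) {
        if (token != resetToken)
            return -1;       // stands in for the "invalid token" error
        ++generation;        // all previously opened handles become invalid
        return 0;            // success
    }
};
```

A handle can remember the generation at which it was opened; after a successful reset the generations no longer match, which is how "all open device handles become invalid" can be detected.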
-Gets a handle to the Direct3D device.
-Receives the device handle.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Direct3D device manager was not initialized. The owner of the device must call |
To get the Direct3D device's
To test whether a device handle is still valid, call
Closes a Direct3D device handle. Call this method to release a device handle retrieved by the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid handle. |
Tests whether a Direct3D device handle is valid.
-Handle to a Direct3D device. To get a device handle, call
The method returns an
Return code | Description |
---|---|
| The device handle is valid. |
| The specified handle is not a Direct3D device handle. |
| The device handle is invalid. |
If the method returns DXVA2_E_NEW_VIDEO_DEVICE, call
Gives the caller exclusive access to the Direct3D device.
-A handle to the Direct3D device. To get the device handle, call
Receives a reference to the device's
Specifies whether to wait for the device lock. If the device is already locked and this parameter is TRUE, the method blocks until the device is unlocked. Otherwise, if the device is locked and this parameter is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The device handle is invalid. |
| The Direct3D device manager was not initialized. The owner of the device must call |
| The device is locked and fBlock is |
| The specified handle is not a Direct3D device handle. |
When you are done using the Direct3D device, call
If the method returns DXVA2_E_NEW_VIDEO_DEVICE, call
If fBlock is TRUE, this method can potentially deadlock. For example, it will deadlock if a thread calls LockDevice and then waits on another thread that calls LockDevice. It will also deadlock if a thread calls LockDevice twice without calling UnlockDevice in between.
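The open/lock/unlock/close pattern described above can be sketched in C++ as follows. The helper name is illustrative; the DXVA2_E_NEW_VIDEO_DEVICE recovery path follows the remarks, and all other error handling is minimal.

```cpp
#include <d3d9.h>
#include <dxva2api.h>

// Borrow the Direct3D device from the manager for exclusive use.
HRESULT UseManagedDevice(IDirect3DDeviceManager9 *pManager)
{
    HANDLE hDevice = NULL;
    HRESULT hr = pManager->OpenDeviceHandle(&hDevice);
    if (FAILED(hr)) return hr;

    IDirect3DDevice9 *pDevice = NULL;
    hr = pManager->LockDevice(hDevice, &pDevice, TRUE); // TRUE = block
    if (hr == DXVA2_E_NEW_VIDEO_DEVICE)
    {
        // The device changed: close the stale handle, reopen, retry.
        pManager->CloseDeviceHandle(hDevice);
        hr = pManager->OpenDeviceHandle(&hDevice);
        if (SUCCEEDED(hr))
            hr = pManager->LockDevice(hDevice, &pDevice, TRUE);
    }
    if (SUCCEEDED(hr))
    {
        // ... use pDevice exclusively here ...
        pDevice->Release();
        pManager->UnlockDevice(hDevice, FALSE); // FALSE = do not save state
    }
    pManager->CloseDeviceHandle(hDevice);
    return hr;
}
```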
-Unlocks the Direct3D device. Call this method to release the device after calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified device handle is not locked, or is not a valid handle. |
Gets a DirectX Video Acceleration (DXVA) service interface.
- A handle to a Direct3D device. To get a device handle, call
The interface identifier (IID) of the requested interface. The Direct3D device might support the following DXVA service interfaces:
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The device handle is invalid. |
| The Direct3D device does not support video acceleration. |
| The Direct3D device manager was not initialized. The owner of the device must call |
| The specified handle is not a Direct3D device handle. |
If the method returns DXVA2_E_NEW_VIDEO_DEVICE, call
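For illustration, requesting the decoder service through an open device handle looks like this (a minimal sketch; the helper name is not part of the API, and the same call can request any other supported DXVA service interface by passing a different IID):

```cpp
#include <d3d9.h>
#include <dxva2api.h>

// Ask the device manager for the DXVA decoder service.
// The caller must release the returned interface.
HRESULT GetDecoderService(IDirect3DDeviceManager9 *pManager,
                          HANDLE hDevice,
                          IDirectXVideoDecoderService **ppService)
{
    return pManager->GetVideoService(hDevice, IID_PPV_ARGS(ppService));
}
```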
Specifies how the output alpha values are calculated for Microsoft DirectX Video Acceleration High Definition (DXVA-HD) blit operations.
-The Mode member of the
To find out which modes the device supports, call the
Alpha values inside the target rectangle are set to opaque.
Alpha values inside the target rectangle are set to the alpha value specified in the background color. See
Existing alpha values remain unchanged in the output surface.
Alpha values from the input stream are scaled and copied to the corresponding destination rectangle for that stream. If the input stream does not have alpha data, the DXVA-HD device sets the alpha values in the target rectangle to an opaque value. If the input stream is disabled or the source rectangle is empty, the alpha values in the target rectangle are not modified.
Specifies state parameters for blit operations when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
To set a state parameter, call the
Defines video processing capabilities for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The device can blend video content in linear color space. Most video content is gamma corrected, resulting in nonlinear values. If the DXVA-HD device sets this flag, it means the device converts colors to linear space before blending, which produces better results.
The device supports the xvYCC color space for YCbCr data.
The device can perform range conversion when the input and output are both RGB but use different color ranges (0-255 or 16-235, for 8-bit RGB).
The device can apply a matrix conversion to YCbCr values when the input and output are both YCbCr. For example, the driver can convert colors from BT.601 to BT.709.
Specifies the type of Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-Hardware device. Video processing is performed in the GPU by the driver.
Software device. Video processing is performed in the CPU by a software plug-in.
Reference device. Video processing is performed in the CPU by a software plug-in.
Other. The device is neither a hardware device nor a software plug-in.
Specifies the intended use for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The graphics driver uses one of these enumeration constants as a hint when it creates the DXVA-HD device.
-Normal video playback. The graphics driver should expose a set of capabilities that are appropriate for real-time video playback.
Optimal speed. The graphics driver should expose a minimal set of capabilities that are optimized for performance.
Use this setting if you want better performance and can accept some reduction in video quality. For example, you might use this setting in power-saving mode or to play video thumbnails.
Optimal quality. The graphics driver should expose its maximum set of capabilities.
Specify this setting to get the best video quality possible. It is appropriate for tasks such as video editing, when quality is more important than speed. It is not appropriate for real-time playback.
Defines features that a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device can support.
-The device can set the alpha values on the video output. See
The device can downsample the video output. See
The device can perform luma keying. See
The device can apply alpha values from color palette entries. See
Defines the range of supported values for an image filter.
-The multiplier enables the filter range to have a fractional step value.
For example, a hue filter might have an actual range of [-180.0 ... +180.0] with a step size of 0.25. The device would report the following range and multiplier:
In this case, a filter value of 2 would be interpreted by the device as 0.50 (or 2 × 0.25).
The device should use a multiplier that can be represented exactly as a base-2 fraction.
-The minimum value of the filter.
The maximum value of the filter.
The default value of the filter.
A multiplier. Use the following formula to translate the filter setting into the actual filter value: Actual Value = Set Value × Multiplier.
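The multiplier formula can be sketched with a small helper. The struct below mirrors the documented range layout (integer minimum/maximum/default plus a floating-point multiplier); the type and function names are illustrative, not part of the API.

```cpp
#include <cassert>

// Shaped like the documented filter-range data: integer bounds and
// default, plus a multiplier that scales the set value.
struct FilterRange {
    int   Minimum;
    int   Maximum;
    int   Default;
    float Multiplier;
};

// Actual Value = Set Value × Multiplier
float ActualFilterValue(const FilterRange &r, int setValue) {
    return setValue * r.Multiplier;
}
```

Using the hue example from the text, an actual range of [-180.0 ... +180.0] with step 0.25 is reported as {-720, 720, 0, 0.25f}, so a set value of 2 yields 0.50.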
Defines capabilities related to image adjustment and filtering for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The device can adjust the brightness level.
The device can adjust the contrast level.
The device can adjust hue.
The device can adjust the saturation level.
The device can perform noise reduction.
The device can perform edge enhancement.
The device can perform anamorphic scaling. Anamorphic scaling can be used to stretch 4:3 content to a widescreen 16:9 aspect ratio.
Describes how a video stream is interlaced.
-Frames are progressive.
Frames are interlaced. The top field of each frame is displayed first.
Frames are interlaced. The bottom field of each frame is displayed first.
Defines capabilities related to input formats for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-These flags define video processing capabilities that are usually not needed, and therefore are not required for DXVA-HD devices to support.
The first three flags relate to RGB support for functions that are normally applied to YCbCr video: deinterlacing, color adjustment, and luma keying. A DXVA-HD device that supports these functions for YCbCr is not required to support them for RGB input. Supporting RGB input for these functions is an additional capability, reflected by these constants. The driver might convert the input to another color space, perform the indicated function, and then convert the result back to RGB.
Similarly, a device that supports deinterlacing is not required to support deinterlacing of palettized formats. This capability is indicated by the
The device can deinterlace an input stream that contains interlaced RGB video.
The device can perform color adjustment on RGB video.
The device can perform luma keying on RGB video.
The device can deinterlace input streams with palettized color formats.
Specifies the inverse telecine (IVTC) capabilities of a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
-The video processor can reverse 3:2 pulldown.
The video processor can reverse 2:2 pulldown.
The video processor can reverse 2:2:2:4 pulldown.
The video processor can reverse 2:3:3:2 pulldown.
The video processor can reverse 3:2:3:2:2 pulldown.
The video processor can reverse 5:5 pulldown.
The video processor can reverse 6:4 pulldown.
The video processor can reverse 8:7 pulldown.
The video processor can reverse 2:2:2:2:2:2:2:2:2:2:2:3 pulldown.
The video processor can reverse other telecine modes not listed here.
Describes how to map color data to a normalized [0...1] range.
These flags are used in the
For YUV colors, these flags specify how to convert between Y'CbCr and Y'PbPr. The Y'PbPr color space has a range of [0...1] for Y' (luma) and [-0.5...0.5] for Pb/Pr (chroma).
Value | Description |
---|---|
Should not be used for YUV data. | |
For 8-bit Y'CbCr components:
For samples with n bits of precision, the general equations are:
The inverse equations to convert from Y'CbCr to Y'PbPr are:
| |
For 8-bit Y'CbCr values, Y' range of [0...1] maps to [48...208]. |
For RGB colors, the flags differentiate various RGB spaces.
Value | Description |
---|---|
sRGB | |
Studio RGB; ITU-R BT.709 | |
ITU-R BT.1361 RGB |
Video data might contain values above or below the nominal range.
Note: The values named
This enumeration is equivalent to the DXVA_NominalRange enumeration used in DXVA 1.0, although it defines additional values.
If you are using the
Specifies the output frame rates for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
This enumeration type is used in the
Specifies the processing capabilities of a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
-The video processor can perform blend deinterlacing.
In blend deinterlacing, the two fields from an interlaced frame are blended into a single progressive frame. A video processor uses blend deinterlacing when it deinterlaces at half rate, as when converting 60i to 30p. Blend deinterlacing does not require reference frames.
The video processor can perform bob deinterlacing.
In bob deinterlacing, missing field lines are interpolated from the lines above and below. Bob deinterlacing does not require reference frames.
The video processor can perform adaptive deinterlacing.
Adaptive deinterlacing uses spatial or temporal interpolation, and switches between the two on a field-by-field basis, depending on the amount of motion. If the video processor does not receive enough reference frames to perform adaptive deinterlacing, it falls back to bob deinterlacing.
The video processor can perform motion-compensated deinterlacing.
Motion-compensated deinterlacing uses motion vectors to recreate missing lines. If the video processor does not receive enough reference frames to perform motion-compensated deinterlacing, it falls back to bob deinterlacing.
The video processor can perform inverse telecine (IVTC).
If the video processor supports this capability, the ITelecineCaps member of the
The video processor can convert the frame rate by interpolating frames.
Describes the content of a video sample. These flags are used in the
This enumeration is equivalent to the DXVA_SampleFormat enumeration used in DXVA 1.0.
The following table shows the mapping from
No exact match. Use |
With the exception of
The value
Specifies the luma key for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-To use this state, the device must support luma keying, indicated by the
If the device does not support luma keying, the
If the input format is RGB, the device must also support the
The values of Lower and Upper give the lower and upper bounds of the luma key, using a nominal range of [0...1]. Given a format with n bits per channel, these values are converted to luma values as follows:
val = f * ((1 << n)-1)
Any pixel whose luma value falls within the upper and lower bounds (inclusive) is treated as transparent.
For example, if the pixel format uses 8-bit luma, the upper bound is calculated as follows:
BYTE Y = BYTE(max(min(1.0, Upper), 0.0) * 255.0)
Note that the value is clamped to the range [0...1] before multiplying by 255.
- If TRUE, luma keying is enabled. Otherwise, luma keying is disabled. The default value is
The lower bound for the luma key. The range is [0...1]. The default state value is 0.0.
The upper bound for the luma key. The range is [0...1]. The default state value is 0.0.
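The clamp-and-scale conversion from the remarks can be written as a small helper (the function name is illustrative; the 8-bit case uses (1 << 8) - 1 = 255, exactly as in the formula val = f * ((1 << n)-1)):

```cpp
#include <cassert>
#include <algorithm>

typedef unsigned char BYTE;

// Convert a normalized luma-key bound in [0...1] to an 8-bit luma value,
// clamping to [0...1] first, as described in the text:
//   BYTE Y = BYTE(max(min(1.0, f), 0.0) * 255.0)
BYTE LumaBound8(double f) {
    double clamped = std::max(std::min(1.0, f), 0.0);
    return (BYTE)(clamped * 255.0);
}
```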
Describes a DirectX surface type for DirectX Video Acceleration (DXVA).
-The surface is a decoder render target.
The surface is a video processor render target.
The surface is a Direct3D texture render target.
Specifies the type of video surface created by a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-If the DXVA-HD device is a software plug-in and the surface type is
A surface for an input stream. This surface type is equivalent to an off-screen plain surface in Microsoft Direct3D. The application can use the surface in Direct3D calls.
A private surface for an input stream. This surface type is equivalent to an off-screen plain surface, except that the application cannot use the surface in Direct3D calls.
A surface for an output stream. This surface type is equivalent to an off-screen plain surface in Direct3D. The application can use the surface in Direct3D calls.
This surface type is recommended for video processing applications that need to lock the surface and access the surface memory. For video playback with optimal performance, a render-target surface or swap chain is recommended instead.
Describes how chroma values are positioned relative to the luma samples in a YUV video frame. These flags are used in the
The following diagrams show the most common arrangements.
-Describes the intended lighting conditions for viewing video content. These flags are used in the
This enumeration is equivalent to the DXVA_VideoLighting enumeration used in DXVA 1.0.
If you are using the
Specifies the color primaries of a video source. These flags are used in the
Color primaries define how to convert RGB colors into the CIE XYZ color space, and can be used to translate colors between different RGB color spaces. An RGB color space is defined by the chromaticity coordinates (x,y) of the RGB primaries plus the white point, as listed in the following table.
Color space | (Rx, Ry) | (Gx, Gy) | (Bx, By) | White point (Wx, Wy) |
---|---|---|---|---|
BT.709 | (0.64, 0.33) | (0.30, 0.60) | (0.15, 0.06) | D65 (0.3127, 0.3290) |
BT.470-2 System M; EBU 3212 | (0.64, 0.33) | (0.29, 0.60) | (0.15, 0.06) | D65 (0.3127, 0.3290) |
BT.470-4 System B,G | (0.67, 0.33) | (0.21, 0.71) | (0.14, 0.08) | CIE III.C (0.310, 0.316) |
SMPTE 170M; SMPTE 240M; SMPTE C | (0.63, 0.34) | (0.31, 0.595) | (0.155, 0.07) | D65 (0.3127, 0.3291) |
The z coordinates can be derived from x and y as follows: z = 1 - x - y. To convert between RGB colors to CIE XYZ tristimulus values, compute a matrix T as follows:
Given T, you can use the following formulas to convert between an RGB color value and a CIE XYZ tristimulus value. These formulas assume that the RGB components are linear (not gamma corrected) and are normalized to the range [0...1].
To convert colors directly from one RGB color space to another, use the following formula, where T1 is the matrix for color space RGB1, and T2 is the matrix for color space RGB2.
For a derivation of these formulas, refer to Charles Poynton, Digital Video and HDTV Algorithms and Interfaces (Morgan Kaufmann, 2003).
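The construction of T from the chromaticity table can be sketched as follows. This is a standard derivation consistent with the text (z = 1 - x - y; the scale factors are chosen so that RGB (1,1,1) maps to the white point); the function names are illustrative.

```cpp
#include <cassert>
#include <cmath>

// Determinant of a 3x3 matrix.
static double Det3(const double m[3][3]) {
    return m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
         - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
         + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
}

// Build the matrix T that maps linear, normalized RGB to CIE XYZ,
// given the (x, y) chromaticities of the primaries and the white point.
void RgbToXyzMatrix(double Rx, double Ry, double Gx, double Gy,
                    double Bx, double By, double Wx, double Wy,
                    double T[3][3]) {
    // Columns are the xyz chromaticities of R, G, B (z = 1 - x - y).
    double P[3][3] = {
        { Rx,          Gx,          Bx          },
        { Ry,          Gy,          By          },
        { 1 - Rx - Ry, 1 - Gx - Gy, 1 - Bx - By },
    };
    // White point in XYZ, normalized so that Y = 1.
    double W[3] = { Wx / Wy, 1.0, (1 - Wx - Wy) / Wy };
    // Solve P * s = W for the per-primary scale factors (Cramer's rule).
    double d = Det3(P), s[3];
    for (int j = 0; j < 3; ++j) {
        double A[3][3];
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                A[r][c] = (c == j) ? W[r] : P[r][c];
        s[j] = Det3(A) / d;
    }
    // T = P * diag(s), so that T * (1,1,1) equals the white point.
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            T[r][c] = P[r][c] * s[c];
}
```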
This enumeration is equivalent to the DXVA_VideoPrimaries enumeration used in DXVA 1.0.
If you are using the
Specifies the conversion function from linear RGB to non-linear RGB (R'G'B'). These flags are used in the
The following table shows the formulas for the most common transfer functions. In these formulas, L is the linear value and L' is the non-linear (gamma corrected) value. These values are relative to a normalized range [0...1].
Color space | Transfer function |
---|---|
sRGB (8-bit) | L' = 12.92L, for L < 0.031308; L' = 1.055L^(1/2.4) - 0.055, for L >= 0.031308 |
BT.470-2 System B, G | L' = L^0.36 |
BT.470-2 System M | L' = L^0.45 |
BT.709 | L' = 4.50L, for L < 0.018; L' = 1.099L^0.45 - 0.099, for L >= 0.018 |
scRGB | L' = L |
SMPTE 240M | L' = 4.0L, for L < 0.0228; L' = 1.1115L^0.45 - 0.01115, for L >= 0.0228 |
The following table shows the inverse formulas, which recover the original linear values from the gamma-corrected values:
Color space | Transfer function |
---|---|
sRGB (8-bit) | L = L'/12.92, for L' < 0.03928; L = ((L' + 0.055)/1.055)^2.4, for L' >= 0.03928 |
BT.470-2 System B, G | L = L'^(1/0.36) |
BT.470-2 System M | L = L'^(1/0.45) |
BT.709 | L = L'/4.50, for L' < 0.081; L = ((L' + 0.099)/1.099)^(1/0.45), for L' >= 0.081 |
scRGB | L = L' |
SMPTE 240M | L = L'/4.0, for L' < 0.0913; L = ((L' + 0.1115)/1.1115)^(1/0.45), for L' >= 0.0913 |
This enumeration is equivalent to the DXVA_VideoTransferFunction enumeration used in DXVA 1.0.
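The sRGB pair from the tables above can be written directly in code (function names are illustrative; L and L' are normalized to [0...1]):

```cpp
#include <cassert>
#include <cmath>

// sRGB transfer function: linear L -> gamma-corrected L'.
double SrgbEncode(double L) {
    return (L < 0.031308) ? 12.92 * L
                          : 1.055 * std::pow(L, 1.0 / 2.4) - 0.055;
}

// Inverse sRGB transfer function: gamma-corrected L' -> linear L.
double SrgbDecode(double Lp) {
    return (Lp < 0.03928) ? Lp / 12.92
                          : std::pow((Lp + 0.055) / 1.055, 2.4);
}
```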
If you are using the
Bitmask to validate flag values. This value is not a valid flag.
Unknown. Treat as
Linear RGB (gamma = 1.0).
True 1.8 gamma, L' = L^(1/1.8).
True 2.0 gamma, L' = L^(1/2.0).
True 2.2 gamma, L' = L^(1/2.2). This transfer function is used in ITU-R BT.470-2 System M (NTSC).
ITU-R BT.709 transfer function. Gamma 2.2 curve with a linear segment in the lower range. This transfer function is used in BT.709, BT.601, SMPTE 296M, SMPTE 170M, BT.470, and SMPTE 274M. In addition BT-1361 uses this function within the range [0...1].
SMPTE 240M transfer function. Gamma 2.2 curve with a linear segment in the lower range.
sRGB transfer function. Gamma 2.4 curve with a linear segment in the lower range.
True 2.8 gamma. L' = L^(1/2.8). This transfer function is used in ITU-R BT.470-2 System B, G (PAL).
Describes the conversion matrices between Y'PbPr (component video) and studio R'G'B'. These flags are used in the
The transfer matrices are defined as follows.
BT.709 transfer matrices:
| Y' |   |  0.212600  0.715200  0.072200 |   | R' |
| Pb | = | -0.114572 -0.385428  0.500000 | x | G' |
| Pr |   |  0.500000 -0.454153 -0.045847 |   | B' |

| R' |   | 1.000000  0.000000  1.574800 |   | Y' |
| G' | = | 1.000000 -0.187324 -0.468124 | x | Pb |
| B' |   | 1.000000  1.855600  0.000000 |   | Pr |
BT.601 transfer matrices:
| Y' |   |  0.299000  0.587000  0.114000 |   | R' |
| Pb | = | -0.168736 -0.331264  0.500000 | x | G' |
| Pr |   |  0.500000 -0.418688 -0.081312 |   | B' |

| R' |   | 1.000000  0.000000  1.402000 |   | Y' |
| G' | = | 1.000000 -0.344136 -0.714136 | x | Pb |
| B' |   | 1.000000  1.772000  0.000000 |   | Pr |
SMPTE 240M (SMPTE RP 145) transfer matrices:
| Y' |   |  0.212000  0.701000  0.087000 |   | R' |
| Pb | = | -0.116000 -0.384000  0.500000 | x | G' |
| Pr |   |  0.500000 -0.445000 -0.055000 |   | B' |

| R' |   | 1.000000  0.000000  1.576000 |   | Y' |
| G' | = | 1.000000 -0.227000 -0.477000 | x | Pb |
| B' |   | 1.000000  1.826000  0.000000 |   | Pr |
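Applying the BT.709 forward matrix is a straight matrix-vector product; for illustration (the function name is not part of any API):

```cpp
#include <cassert>
#include <cmath>

// BT.709 forward transfer matrix: studio R'G'B' -> Y'PbPr.
void Bt709RgbToYPbPr(double R, double G, double B,
                     double *Y, double *Pb, double *Pr) {
    *Y  =  0.212600 * R + 0.715200 * G + 0.072200 * B;
    *Pb = -0.114572 * R - 0.385428 * G + 0.500000 * B;
    *Pr =  0.500000 * R - 0.454153 * G - 0.045847 * B;
}
```

Note that each row of the Pb and Pr coefficients sums to zero and the Y' row sums to one, so white (R' = G' = B' = 1) maps to Y' = 1, Pb = Pr = 0.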
This enumeration is equivalent to the DXVA_VideoTransferMatrix enumeration used in DXVA 1.0.
If you are using the
Creates an instance of the Direct3D Device Manager.
-If this function succeeds, it returns
Windows Store apps must use IMFDXGIDeviceManager and Direct3D 11 Video APIs.
-Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
- A reference to the
The interface identifier (IID) of the requested interface. Any of the following interfaces might be supported by the Direct3D device:
Receives a reference to the interface. The caller must release the interface.
If this function succeeds, it returns
Creates a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-A reference to the
A reference to a
A member of the
A reference to an initialization function for a software device. Set this reference if you are using a software plug-in device. Otherwise, set this parameter to
The function reference type is PDXVAHDSW_Plugin.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Direct3D device does not support DXVA-HD. |
Use the
Gets the range of values for an image filter that the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device supports.
-To find out which image filters the device supports, check the FilterCaps member of the
Applies to: desktop apps only
Gets the range of values for an image filter that the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device supports.
-To find out which image filters the device supports, check the FilterCaps member of the
Gets the capabilities of the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-Creates one or more Microsoft Direct3D video surfaces.
-The width of each surface, in pixels.
The height of each surface, in pixels.
The pixel format, specified as a
The memory pool in which the surface is created. This parameter must equal the InputPool member of the
Reserved. Set to 0.
The type of surface to create, specified as a member of the
The number of surfaces to create.
A reference to an array of
Reserved. Set to
If this method succeeds, it returns
Gets the capabilities of the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-A reference to a
If this method succeeds, it returns
Gets a list of the output formats supported by the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The number of formats to retrieve. This parameter must equal the OutputFormatCount member of the
A reference to an array of
If this method succeeds, it returns
The list of formats can include both
Gets a list of the input formats supported by the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The number of formats to retrieve. This parameter must equal the InputFormatCount member of the
A reference to an array of
If this method succeeds, it returns
The list of formats can include both
Gets the capabilities of one or more Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processors.
-The number of elements in the pCaps array. This parameter must equal the VideoProcessorCount member of the
A reference to an array of
If this method succeeds, it returns
Gets a list of custom rates that a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor supports. Custom rates are used for frame-rate conversion and inverse telecine (IVTC).
-A
The number of rates to retrieve. This parameter must equal the CustomRateCount member of the
A reference to an array of
If this method succeeds, it returns
Gets the range of values for an image filter that the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device supports.
-The type of image filter, specified as a member of the
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Filter parameter is invalid or the device does not support the specified filter. |
To find out which image filters the device supports, check the FilterCaps member of the
Creates a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
-A
Receives a reference to the
If this method succeeds, it returns
Applies to: desktop apps only
Creates a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-A reference to the
A reference to a
A member of the
Use the
Represents a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
To get a reference to this interface, call the
Sets a state parameter for a blit operation by a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The state parameter to set, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer that contains the state data. The meaning of the data depends on the State parameter. Each state has a corresponding data structure; for more information, see
If this method succeeds, it returns
Gets the value of a state parameter for blit operations performed by a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The state parameter to query, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer allocated by the caller. The method copies the state data into the buffer. The buffer must be large enough to hold the data structure that corresponds to the state parameter. For more information, see
If this method succeeds, it returns
Sets a state parameter for an input stream on a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The zero-based index of the input stream. To get the maximum number of streams, call
The state parameter to set, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer that contains the state data. The meaning of the data depends on the State parameter. Each state has a corresponding data structure; for more information, see
If this method succeeds, it returns
Call this method to set state parameters that apply to individual input streams.
-Gets the value of a state parameter for an input stream on a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The zero-based index of the input stream. To get the maximum number of streams, call
The state parameter to query, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer allocated by the caller. The method copies the state data into the buffer. The buffer must be large enough to hold the data structure that corresponds to the state parameter. For more information, see
If this method succeeds, it returns
Performs a video processing blit on one or more input samples and writes the result to a Microsoft Direct3D surface.
-A reference to the
Frame number of the output video frame, indexed from zero.
Number of input streams to process.
Pointer to an array of
If this method succeeds, it returns
The maximum value of StreamCount is given in the MaxStreamStates member of the
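A single-stream blit can be sketched as follows (a minimal sketch: the helper name is illustrative, the stream-data fields not shown are left zeroed, and the input surface is assumed to have been created for an input stream):

```cpp
#include <d3d9.h>
#include <dxvahd.h>

// Composite one input stream onto the output surface.
HRESULT BlitOneStream(IDXVAHD_VideoProcessor *pVP,
                      IDirect3DSurface9 *pOutputSurface,
                      IDirect3DSurface9 *pInputSurface,
                      UINT frameNumber)
{
    DXVAHD_STREAM_DATA stream = {};
    stream.Enable            = TRUE;
    stream.OutputIndex       = 0;
    stream.InputFrameOrField = frameNumber;
    stream.pInputSurface     = pInputSurface;

    return pVP->VideoProcessBltHD(pOutputSurface, frameNumber,
                                  1, &stream);
}
```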
Provides DirectX Video Acceleration (DXVA) services from a Direct3D device. To get a reference to this interface, call
This is the base interface for DXVA services. The Direct3D device can support any of the following DXVA services, which derive from
Applies to: desktop apps only
Provides DirectX Video Acceleration (DXVA) services from a Direct3D device. To get a reference to this interface, call
This is the base interface for DXVA services. The Direct3D device can support any of the following DXVA services, which derive from
Creates a DirectX Video Acceleration (DXVA) video processor or DXVA decoder render target.
-The width of the surface, in pixels.
The height of the surface, in pixels.
The number of back buffers. The method creates BackBuffers + 1 surfaces.
The pixel format, specified as a
The memory pool in which to create the surface, specified as a
Reserved. Set this value to zero.
The type of surface to create. Use one of the following values.
Value | Meaning |
---|---|
Video decoder render target. | |
Video processor render target. Used for | |
Software render target. This surface type is for use with software DXVA devices. |
The address of an array of
A reference to a handle that is used to share the surfaces between Direct3D devices. Set this parameter to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid parameter |
| The DirectX Video Acceleration Manager is not initialized. |
| |
If the method returns E_FAIL, try calling
Applies to: desktop apps only
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
- A reference to the
If this function succeeds, it returns
Represents a DirectX Video Acceleration (DXVA) video decoder device.
To get a reference to this interface, call
The
Retrieves the DirectX Video Acceleration (DXVA) decoder service that created this decoder device.
-Retrieves the DirectX Video Acceleration (DXVA) decoder service that created this decoder device.
-Receives a reference to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the parameters that were used to create this device.
-Receives the device
Pointer to a
Pointer to a
Receives an array of
Receives the number of elements in the pppDecoderRenderTargets array. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. At least one parameter must be non- |
You can set any parameter to
If you specify a non-
Retrieves a reference to a DirectX Video Acceleration (DXVA) decoder buffer.
-Type of buffer to retrieve. Use one of the following values.
Value | Meaning |
---|---|
Picture decoding parameter buffer. | |
Macroblock control command buffer. | |
Residual difference block data buffer. | |
Deblocking filter control command buffer. | |
Inverse quantization matrix buffer. | |
Slice-control buffer. | |
Bitstream data buffer. | |
Motion vector buffer. | |
Film grain synthesis data buffer. |
Receives a reference to the start of the memory buffer.
Receives the size of the buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The method locks the Direct3D surface that contains the buffer. When you are done using the buffer, call
This method might block if too many operations have been queued on the GPU. The method unblocks when a free buffer becomes available.
- Releases a buffer that was obtained by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Starts the decoding operation.
-Pointer to the
Reserved; set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid surface type. See Remarks. |
After this method is called, call
Each call to BeginFrame must have a matching call to EndFrame, and BeginFrame calls cannot be nested.
DXVA 1.0 migration note: Unlike the IAMVideoAccelerator::BeginFrame method, which specifies the buffer as an index, this method takes a reference directly to the uncompressed buffer.
The surface pointed to by pRenderTarget must be created by calling
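The BeginFrame/GetBuffer/Execute/EndFrame sequence can be sketched as one decode iteration. This is a skeleton only: filling the picture-parameter and other compressed buffers is codec-specific and omitted, and the helper name is illustrative. (The buffer-type constant really is spelled `DXVA2_BitStreamDateBufferType` in dxva2api.h.)

```cpp
#include <cstring>
#include <d3d9.h>
#include <dxva2api.h>

// One decode iteration: begin the frame, fill the bitstream buffer,
// execute, and end the frame.
HRESULT DecodeOneFrame(IDirectXVideoDecoder *pDecoder,
                       IDirect3DSurface9 *pRenderTarget,
                       const void *bitstream, UINT bitstreamSize)
{
    HRESULT hr = pDecoder->BeginFrame(pRenderTarget, NULL);
    if (FAILED(hr)) return hr;

    void *pBuffer = NULL;
    UINT bufferSize = 0;
    hr = pDecoder->GetBuffer(DXVA2_BitStreamDateBufferType,
                             &pBuffer, &bufferSize);
    if (SUCCEEDED(hr))
    {
        if (bitstreamSize <= bufferSize)
            memcpy(pBuffer, bitstream, bitstreamSize);
        pDecoder->ReleaseBuffer(DXVA2_BitStreamDateBufferType);

        DXVA2_DecodeExecuteParams exec = {};
        // ... set exec.NumCompBuffers and exec.pCompressedBuffers
        //     to describe the filled buffers (codec-specific) ...
        hr = pDecoder->Execute(&exec);
    }
    pDecoder->EndFrame(NULL); // every BeginFrame needs a matching EndFrame
    return hr;
}
```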
Signals the end of the decoding operation.
-Reserved.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Executes a decoding operation on the current frame.
-Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You must call
Provides access to DirectX Video Acceleration (DXVA) decoder services. Use this interface to query which hardware-accelerated decoding operations are available and to create DXVA video decoder devices.
To get a reference to this interface, call
Applies to: desktop apps only
Provides access to DirectX Video Acceleration (DXVA) decoder services. Use this interface to query which hardware-accelerated decoding operations are available and to create DXVA video decoder devices.
To get a reference to this interface, call
Retrieves an array of GUIDs that identifies the decoder devices supported by the graphics hardware.
-Receives the number of GUIDs.
Receives an array of GUIDs. The size of the array is retrieved in the Count parameter. The method allocates the memory for the array. The caller must free the memory by calling CoTaskMemFree.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Error from the Direct3D device. |
| The Microsoft Basic Display Adapter is in use, or the Direct3D 11 device type is the reference rasterizer. These devices do not support video decoders. |
The following decoder GUIDs are defined. Some of these GUIDs have alternate names, shown in parentheses.
Description | |
---|---|
DXVA2_ModeH264_A (DXVA2_ModeH264_MoComp_NoFGT) | H.264 motion compensation (MoComp), no film grain technology (FGT). |
DXVA2_ModeH264_B (DXVA2_ModeH264_MoComp_FGT) | H.264 MoComp, FGT. |
DXVA2_ModeH264_C (DXVA2_ModeH264_IDCT_NoFGT) | H.264 inverse discrete cosine transform (IDCT), no FGT. |
DXVA2_ModeH264_D (DXVA2_ModeH264_IDCT_FGT) | H.264 IDCT, FGT. |
DXVA2_ModeH264_E (DXVA2_ModeH264_VLD_NoFGT) | H.264 variable-length decoder (VLD), no FGT. |
DXVA2_ModeH264_F (DXVA2_ModeH264_VLD_FGT) | H.264 VLD, FGT. |
DXVA2_ModeMPEG2_IDCT | MPEG-2 IDCT. |
DXVA2_ModeMPEG2_MoComp | MPEG-2 MoComp. |
DXVA2_ModeMPEG2_VLD | MPEG-2 VLD. |
DXVA2_ModeVC1_A (DXVA2_ModeVC1_PostProc) | VC-1 post processing. |
DXVA2_ModeVC1_B (DXVA2_ModeVC1_MoComp) | VC-1 MoComp. |
DXVA2_ModeVC1_C (DXVA2_ModeVC1_IDCT) | VC-1 IDCT. |
DXVA2_ModeVC1_D (DXVA2_ModeVC1_VLD) | VC-1 VLD. |
DXVA2_ModeWMV8_A (DXVA2_ModeWMV8_PostProc) | Windows Media Video 8 post processing. |
DXVA2_ModeWMV8_B (DXVA2_ModeWMV8_MoComp) | Windows Media Video 8 MoComp. |
DXVA2_ModeWMV9_A (DXVA2_ModeWMV9_PostProc) | Windows Media Video 9 post processing. |
DXVA2_ModeWMV9_B (DXVA2_ModeWMV9_MoComp) | Windows Media Video 9 MoComp. |
DXVA2_ModeWMV9_C (DXVA2_ModeWMV9_IDCT) | Windows Media Video 9 IDCT. |
-
Retrieves the supported render targets for a specified decoder device.
-Receives the number of formats.
Receives an array of formats, specified as
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets the configurations that are available for a decoder device.
-A
A reference to a
Reserved. Set to
Receives the number of configurations.
Receives an array of
If this method succeeds, it returns
Creates a video decoder device.
-Pointer to a
Pointer to a
Pointer to an array of
Size of the ppDecoderRenderTargets array. This value cannot be zero.
Receives a reference to the decoder's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates a video decoder device.
-Pointer to a
Pointer to a
Pointer to an array of
Size of the ppDecoderRenderTargets array. This value cannot be zero.
Receives a reference to the decoder's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Applies to: desktop apps only
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
- A reference to the
If this function succeeds, it returns
Sets the type of video memory for uncompressed video surfaces. This interface is used by video decoders and transforms.
The DirectShow enhanced video renderer (EVR) filter exposes this interface as a service on the filter's input pins. To obtain a reference to this interface, call
A video decoder can use this interface to enumerate the EVR filter's preferred surface types and then select the surface type. The decoder should then create surfaces of that type to hold the results of the decoding operation.
This interface does not define a way to clear the surface type. In the case of DirectShow, disconnecting two filters invalidates the surface type.
-
Sets the video surface type that a decoder will use for DirectX Video Acceleration (DXVA) 2.0.
-By calling this method, the caller agrees to create surfaces of the type specified in the dwType parameter.
In DirectShow, during pin connection, a video decoder that supports DXVA 2.0 should call SetSurface with the value
The only way to undo the setting is to break the pin connection.
-
Retrieves a supported video surface type.
-Zero-based index of the surface type to retrieve. Surface types are indexed in order of preference, starting with the most preferred type.
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index was out of range. |
Sets the video surface type that a decoder will use for DirectX Video Acceleration (DXVA) 2.0.
-Member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The renderer does not support the specified surface type. |
By calling this method, the caller agrees to create surfaces of the type specified in the dwType parameter.
In DirectShow, during pin connection, a video decoder that supports DXVA 2.0 should call SetSurface with the value
The only way to undo the setting is to break the pin connection.
-
Retrieves the parameters that were used to create this device.
-You can set any parameter to
Retrieves the DirectX Video Acceleration (DXVA) video processor service that created this video processor device.
-
Retrieves the capabilities of the video processor device.
-
Retrieves the DirectX Video Acceleration (DXVA) video processor service that created this video processor device.
-Receives a reference to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the parameters that were used to create this device.
-Receives the device
Pointer to a
Receives the render target format, specified as a
Receives the maximum number of streams supported by the device. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. At least one parameter must be non- |
You can set any parameter to
Retrieves the capabilities of the video processor device.
-Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the range of values for a video processor (ProcAmp) setting on this video processor device.
-The ProcAmp setting to query. See ProcAmp Settings.
Pointer to a
If this method succeeds, it returns
Retrieves the range of values for an image filter supported by this device.
-Filter setting to query. For more information, see DXVA Image Filter Settings.
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Performs a video process operation on one or more input samples and writes the result to a Direct3D9 surface.
- A reference to the
A reference to a
A reference to an array of
The maximum number of input samples is given by the constant MAX_DEINTERLACE_SURFACES, defined in the header file dxva2api.h.
The number of elements in the pSamples array.
Reserved; set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Internal driver error. |
| Invalid arguments. |
When the method returns, the operation might not be complete.
If the method returns E_INVALIDARG, check for the following:
Provides access to DirectX Video Acceleration (DXVA) video processing services.
Use this interface to query which hardware-accelerated video processing operations are available and to create DXVA video processor devices. To obtain a reference to this interface, call
Applies to: desktop apps only
Provides access to DirectX Video Acceleration (DXVA) video processing services.
Use this interface to query which hardware-accelerated video processing operations are available and to create DXVA video processor devices. To obtain a reference to this interface, call
Registers a software video processing device.
-Pointer to an initialization function.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets an array of GUIDs which identify the video processors supported by the graphics hardware.
- Pointer to a
Receives the number of GUIDs.
Receives an array of GUIDs. The size of the array is retrieved in the pCount parameter. The method allocates the memory for the array. The caller must free the memory by calling CoTaskMemFree.
If this method succeeds, it returns
The following video processor GUIDs are predefined.
Description | |
---|---|
DXVA2_VideoProcBobDevice | Bob deinterlace device. This device uses a "bob" algorithm to deinterlace the video. Bob algorithms create missing field lines by interpolating the lines in a single field. |
DXVA2_VideoProcProgressiveDevice | Progressive video device. This device is available for progressive video, which does not require a deinterlace algorithm. |
DXVA2_VideoProcSoftwareDevice | Reference (software) device. |
The graphics device may define additional vendor-specific GUIDs. The driver provides the list of GUIDs in descending quality order. The mode with the highest quality is first in the list. To get the capabilities of each mode, call
Gets the render target formats that a video processor device supports. The list may include RGB and YUV formats.
- A
A reference to a
Receives the number of formats.
Receives an array of formats, specified as
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets a list of substream formats supported by a specified video processor device.
- A
A reference to a
The format of the render target surface, specified as a
Receives the number of elements returned in the ppFormats array.
Receives an array of
If this method succeeds, it returns
Gets the capabilities of a specified video processor device.
- A
A reference to a
The format of the render target surface, specified as a
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets the range of values for a video processor (ProcAmp) setting.
-A
A reference to a
The format of the render target surface, specified as a
The ProcAmp setting to query. See ProcAmp Settings.
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the range of values for an image filter supported by a video processor device.
- A
A reference to a
The format of the render target surface, specified as a
The filter setting to query. See DXVA Image Filter Settings.
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates a video processor device.
-A
A reference to a
The format of the render target surface, specified as a
The maximum number of substreams that will be used with this device.
Receives a reference to the video processor's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Applies to: desktop apps only
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
- A reference to the
If this function succeeds, it returns
Contains an initialization vector (IV) for 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher encryption.
-For AES-CTR encyption, the pvPVPState member of the
The D3DAES_CTR_IV structure and the
The IV, in big-endian format.
The block count, in big-endian format.
Defines a 16-bit AYUV pixel value.
-Contains the Cr chroma value (also called V).
Contains the Cb chroma value (also called U).
Contains the luma value.
Contains the alpha value.
Defines an 8-bit AYUV pixel value.
-Contains the Cr chroma value (also called V).
Contains the Cb chroma value (also called U).
Contains the luma value.
Contains the alpha value.
Specifies how the output alpha values are calculated for blit operations when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-Specifies the alpha fill mode, as a member of the
If the FeatureCaps member of the
The default state value is
Zero-based index of the input stream to use for the alpha values. This member is used when the alpha fill mode is
To get the maximum number of streams, call
Specifies the background color for blit operations, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-The background color is used to fill the target rectangle wherever no video image appears. Areas outside the target rectangle are not affected. See
The color space of the background color is determined by the color space of the output. See
The alpha value of the background color is used only when the alpha fill mode is
The default background color is full-range RGB black, with opaque alpha.
- If TRUE, the BackgroundColor member specifies a YCbCr color. Otherwise, it specifies an RGB color. The default device state is
A
Specifies whether the output is downsampled in a blit operation, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-If the Enable member is TRUE, the device downsamples the composed target rectangle to the size given in the Size member, and then scales it back to the size of the target rectangle.
The width and height of Size must be greater than zero. If the size is larger than the target rectangle, downsampling does not occur.
To use this state, the device must support downsampling, indicated by the
If the device does not support downsampling, the
Downsampling is sometimes used to reduce the quality of premium content when other forms of content protection are not available.
-If TRUE, downsampling is enabled. Otherwise, downsampling is disabled and the Size member is ignored. The default state value is
The sampling size. The default value is (1,1).
Specifies the output color space for blit operations, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-The RGB_Range member applies to RGB output, while the YCbCr_Matrix and YCbCr_xvYCC members apply to YCbCr (YUV) output. If the device performs color-space conversion on the background color, it uses the values that apply to both color spaces.
Extended YCbCr can be used with either transfer matrix. Extended YCbCr does not change the black point or white point: the black point is still 16 and the white point is still 235. However, extended YCbCr explicitly allows blacker-than-black values in the range 1-15, and whiter-than-white values in the range 236-254. When extended YCbCr is used, the driver should not clip the luma values to the nominal 16-235 range.
If the device supports extended YCbCr, it sets the
If the output format is a wide-gamut RGB format, output might fall outside the nominal [0...1] range of sRGB. This is particularly true if one or more input streams use extended YCbCr.
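The nominal and extended ranges described above can be illustrated with a small classifier (this helper is illustrative only, not part of the DXVA-HD API):

```cpp
#include <cassert>

// Classify an 8-bit luma sample under extended YCbCr (xvYCC).
// The nominal range is 16-235; xvYCC additionally permits
// blacker-than-black values (1-15) and whiter-than-white values (236-254).
enum class LumaRange { BlackerThanBlack, Nominal, WhiterThanWhite, Reserved };

LumaRange ClassifyLuma(unsigned char y)
{
    if (y >= 16 && y <= 235)  return LumaRange::Nominal;
    if (y >= 1  && y <= 15)   return LumaRange::BlackerThanBlack;
    if (y >= 236 && y <= 254) return LumaRange::WhiterThanWhite;
    return LumaRange::Reserved; // 0 and 255 are outside the coded ranges
}
```

A driver honoring xvYCC passes the 1-15 and 236-254 values through rather than clipping them to the nominal range.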
-Specifies whether the output is intended for playback or video processing (such as editing or authoring). The device can optimize the processing based on the type. The default state value is 0 (playback).
Value | Meaning |
---|---|
| Playback. |
| Video processing. |
Specifies the RGB color range. The default state value is 0 (full range).
Value | Meaning |
---|---|
| Full range (0-255). |
| Limited range (16-235). |
Specifies the YCbCr transfer matrix. The default state value is 0 (BT.601).
Value | Meaning |
---|---|
| ITU-R BT.601. |
| ITU-R BT.709. |
Specifies whether the output uses conventional YCbCr or extended YCbCr (xvYCC). The default state value is zero (conventional YCbCr).
Value | Meaning |
---|---|
| Conventional YCbCr. |
| Extended YCbCr (xvYCC). |
Contains data for a private blit state for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-Use this structure for proprietary or device-specific state parameters.
The caller allocates the pData array. Set the DataSize member to the size of the array in bytes. When retrieving the state data, you can set pData to
A
The size, in bytes, of the buffer pointed to by the pData member.
A reference to a buffer that contains the private state data. The DXVA-HD runtime passes this buffer directly to the device without validation.
Specifies the target rectangle for blitting, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-Specifies whether to use the target rectangle. The default state value is
Value | Meaning |
---|---|
| Use the target rectangle specified by the TargetRect member. |
Use the entire destination surface as the target rectangle. Ignore the TargetRect member. |
Specifies the target rectangle. The target rectangle is the area within the destination surface where the output will be drawn. The target rectangle is given in pixel coordinates, relative to the destination surface. The default state value is an empty rectangle, (0, 0, 0, 0).
If the Enable member is
Defines a color value for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-This union can represent both RGB and YCbCr colors. The interpretation of the union depends on the context.
-A
A
Specifies an RGB color value.
-The RGB values have a nominal range of [0...1]. For an RGB format with n bits per channel, the value of each color component is calculated as follows:
val = f * ((1 << n)-1)
For example, for RGB-32 (8 bits per channel), val = BYTE(f * 255.0).
For full-range RGB, reference black is (0.0, 0.0, 0.0), which corresponds to (0, 0, 0) in an 8-bit representation. For limited-range RGB, reference black is (0.0625, 0.0625, 0.0625), which corresponds to (16, 16, 16) in an 8-bit representation. For wide-gamut formats, the values might fall outside of the [0...1] range.
-The red value.
The green value.
The blue value.
The alpha value. Values range from 0 (transparent) to 1 (opaque).
Defines a color value for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-This union can represent both RGB and YCbCr colors. The interpretation of the union depends on the context.
-A
A
Describes the configuration of a DXVA decoder device.
-Defines the encryption protocol type for bit-stream data buffers. If no encryption is applied, the value is DXVA_NoEncrypt. If ConfigBitstreamRaw is 0, the value must be DXVA_NoEncrypt.
Defines the encryption protocol type for macroblock control data buffers. If no encryption is applied, the value is DXVA_NoEncrypt. If ConfigBitstreamRaw is 1, the value must be DXVA_NoEncrypt.
Defines the encryption protocol type for residual difference decoding data buffers (buffers containing spatial-domain data or sets of transform-domain coefficients for accelerator-based IDCT). If no encryption is applied, the value is DXVA_NoEncrypt. If ConfigBitstreamRaw is 1, the value must be DXVA_NoEncrypt.
Indicates whether the host-decoder sends raw bit-stream data. If the value is 1, the data for the pictures will be sent in bit-stream buffers as raw bit-stream content. If the value is 0, picture data will be sent using macroblock control command buffers. If either ConfigResidDiffHost or ConfigResidDiffAccelerator is 1, the value must be 0.
Specifies whether macroblock control commands are in raster scan order or in arbitrary order. If the value is 1, the macroblock control commands within each macroblock control command buffer are in raster-scan order. If the value is 0, the order is arbitrary. For some types of bit streams, forcing raster order either greatly increases the number of required macroblock control buffers that must be processed, or requires host reordering of the control information. Therefore, supporting arbitrary order can be more efficient.
Contains the host residual difference configuration. If the value is 1, some residual difference decoding data may be sent as blocks in the spatial domain from the host. If the value is 0, spatial domain data will not be sent.
Indicates the word size used to represent residual difference spatial-domain blocks for predicted (non-intra) pictures when using host-based residual difference decoding.
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 1, the host will send residual difference spatial-domain blocks for non-intra macroblocks using 8-bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 0, the host will send residual difference spatial-domain blocks of data for non-intra macroblocks using 16- bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 0, ConfigSpatialResid8 must be 0.
For intra pictures, spatial-domain blocks must be sent using 8-bit samples if bits-per-pixel (BPP) is 8, and using 16-bit samples if BPP > 8. If ConfigIntraResidUnsigned is 0, these samples are sent as signed integer values relative to a constant reference value of 2^(BPP-1), and if ConfigIntraResidUnsigned is 1, these samples are sent as unsigned integer values relative to a constant reference value of 0.
If the value is 1, 8-bit difference overflow blocks are subtracted rather than added. The value must be 0 unless ConfigSpatialResid8 is 1.
The ability to subtract differences rather than add them enables 8-bit difference decoding to be fully compliant with the full ±255 range of values required in video decoder specifications, because +255 cannot be represented as the addition of two signed 8-bit numbers, but any number in the range ±255 can be represented as the difference between two signed 8-bit numbers (+255 = +127 minus -128).
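The range argument above can be checked with a couple of lines (these helper names are illustrative, not part of the decoder configuration):

```cpp
#include <cassert>
#include <cstdint>

// Adding two signed 8-bit values tops out at +127 + +127 = +254,
// so the full +255 residual cannot be reached by addition.
int ResidualByAddition(int8_t a, int8_t b)    { return a + b; }

// Subtraction reaches the full range: +127 - (-128) = +255.
int ResidualBySubtraction(int8_t a, int8_t b) { return a - b; }
```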
If the value is 1, spatial-domain blocks for intra macroblocks must be clipped to an 8-bit range on the host and spatial-domain blocks for non-intra macroblocks must be clipped to a 9-bit range on the host. If the value is 0, no such clipping is necessary by the host.
The value must be 0 unless ConfigSpatialResid8 is 0 and ConfigResidDiffHost is 1.
If the value is 1, any spatial-domain residual difference data must be sent in a chrominance-interleaved form matching the YUV format chrominance interleaving pattern. The value must be 0 unless ConfigResidDiffHost is 1 and the YUV format is NV12 or NV21.
Indicates the method of representation of spatial-domain blocks of residual difference data for intra blocks when using host-based difference decoding.
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 0, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 1, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
The value of the member must be 0 unless ConfigResidDiffHost is 1.
If the value is 1, transform-domain blocks of coefficient data may be sent from the host for accelerator-based IDCT. If the value is 0, accelerator-based IDCT will not be used. If both ConfigResidDiffHost and ConfigResidDiffAccelerator are 1, this indicates that some residual difference decoding will be done on the host and some on the accelerator, as indicated by macroblock-level control commands.
The value must be 0 if ConfigBitstreamRaw is 1.
If the value is 1, the inverse scan for transform-domain block processing will be performed on the host, and absolute indices will be sent instead for any transform coefficients. If the value is 0, the inverse scan will be performed on the accelerator.
The value must be 0 if ConfigResidDiffAccelerator is 0 or if Config4GroupedCoefs is 1.
If the value is 1, the IDCT specified in Annex W of ITU-T Recommendation H.263 is used. If the value is 0, any compliant IDCT can be used for off-host IDCT.
The H.263 annex does not comply with the IDCT requirements of MPEG-2 corrigendum 2, so the value must not be 1 for use with MPEG-2 video.
The value must be 0 if ConfigResidDiffAccelerator is 0, indicating purely host-based residual difference decoding.
If the value is 1, transform coefficients for off-host IDCT will be sent using the DXVA_TCoef4Group structure. If the value is 0, the DXVA_TCoefSingle structure is used. The value must be 0 if ConfigResidDiffAccelerator is 0 or if ConfigHostInverseScan is 1.
Specifies how many frames the decoder device processes at any one time.
Contains decoder-specific configuration information.
Describes a video stream for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
The display driver can use the information in this structure to optimize the capabilities of the video processor. For example, some capabilities might not be exposed for high-definition (HD) content, for performance reasons.
-Frame rates are expressed as ratios. For example, 30 frames per second (fps) is expressed as 30:1, and 29.97 fps is expressed as 30000/1001. For interlaced content, a frame consists of two fields, so that the frame rate is half the field rate.
If the application will composite two or more input streams, use the largest stream for the values of InputWidth and InputHeight.
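The ratio convention above can be sketched as follows (the struct and helper names are illustrative, not the actual DXVA-HD types):

```cpp
#include <cassert>

// Frame rates are expressed as ratios: 30 fps is 30/1, 29.97 fps is
// 30000/1001. For interlaced content, a frame consists of two fields,
// so the frame rate is half the field rate.
struct Rational { int Numerator; int Denominator; };

double ToFps(Rational r)
{
    return static_cast<double>(r.Numerator) / r.Denominator;
}

Rational FieldRateToFrameRate(Rational fieldRate)
{
    return { fieldRate.Numerator, fieldRate.Denominator * 2 };
}
```

Keeping the rate as a ratio avoids rounding 29.97 fps content, which is exactly 30000/1001.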
-A member of the
The frame rate of the input video stream, specified as a
The width of the input frames, in pixels.
The height of the input frames, in pixels.
The frame rate of the output video stream, specified as a
The width of the output frames, in pixels.
The height of the output frames, in pixels.
Specifies a custom rate for frame-rate conversion or inverse telecine (IVTC).
-The CustomRate member gives the rate conversion factor, while the remaining members define the pattern of input and output samples.
Here are some example uses for this structure:
Frame rate conversion from 60p to 120p (doubling the frame rate).
Reverse 2:3 pulldown (IVTC) from 60i to 24p.
(Ten interlaced fields are converted into four progressive frames.)
The ratio of the output frame rate to the input frame rate, expressed as a
The number of output frames that will be generated for every N input samples, where N = InputFramesOrFields.
If TRUE, the input stream must be interlaced. Otherwise, the input stream must be progressive.
The number of input fields or frames for every N output frames that will be generated, where N = OutputFrames.
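The bookkeeping described above can be sketched as follows (the struct and function names here are illustrative, not the actual DXVA-HD structure):

```cpp
#include <cassert>

// A custom-rate pattern: OutputFrames frames are generated for every
// group of InputFramesOrFields input samples (fields when the input
// is interlaced, frames otherwise).
struct CustomRatePattern {
    int  OutputFrames;        // frames generated per input group
    int  InputFramesOrFields; // size of the input group
    bool InputInterlaced;     // input samples are fields, not frames
};

// Output frames produced for a whole number of input groups.
int OutputForInput(const CustomRatePattern& p, int inputSamples)
{
    return inputSamples / p.InputFramesOrFields * p.OutputFrames;
}
```

Reverse 2:3 pulldown maps groups of ten interlaced fields to four progressive frames; frame doubling maps each input frame to two output frames.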
Describes a buffer sent from a decoder to a DirectX Video Acceleration (DXVA) device.
-This structure corresponds closely to the DXVA_BufferDescription structure in DXVA 1, but some of the fields are no longer used in DXVA 2.
-Identifies the type of buffer passed to the accelerator. Must be one of the following values.
Value | Meaning |
---|---|
Picture decoding parameter buffer. | |
Macroblock control command buffer. | |
Residual difference block data buffer. | |
Deblocking filter control command buffer. | |
Inverse quantization matrix buffer. | |
Slice-control buffer. | |
Bitstream data buffer. | |
Motion vector buffer. | |
Film grain synthesis data buffer. |
Reserved. Set to zero.
Specifies the offset of the relevant data from the beginning of the buffer, in bytes. Currently this value must be zero.
Specifies the amount of relevant data in the buffer, in bytes. The location of the last byte of content in the buffer is DataOffset + DataSize - 1.
Specifies the macroblock address of the first macroblock in the buffer. The macroblock address is given in raster scan order.
Specifies the number of macroblocks of data in the buffer. This count includes skipped macroblocks. This value must be zero if the data buffer type is one of the following: picture decoding parameters, inverse-quantization matrix, AYUV, IA44/AI44, DPXD, Highlight, or DCCMD.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
Pointer to a byte array that contains an initialization vector (IV) for encrypted data. If the decode buffer does not contain encrypted data, set this member to
Contains parameters for the
Contains private data for the
This structure corresponds to parameters of the IAMVideoAccelerator::Execute method in DirectX Video Acceleration (DXVA) version 1.
-Describes the format of a video stream.
-Most of the values in this structure can be translated directly to and from
Describes the interlacing of the video frames. Contains a value from the
Describes the chroma siting. Contains a value from the
Describes the nominal range of the Y'CbCr or RGB color data. Contains a value from the
Describes the transform from Y'PbPr (component video) to studio R'G'B'. Contains a value from the
Describes the intended viewing conditions. Contains a value from the
Describes the color primaries. Contains a value from the
Describes the gamma correction transfer function. Contains a value from the
Use this member to access all of the bits in the union.
Defines the range of supported values for an image filter.
-The multiplier enables the filter range to have a fractional step value.
For example, a hue filter might have an actual range of [-180.0 ... +180.0] with a step size of 0.25. The device would report the following range and multiplier:
In this case, a filter value of 2 would be interpreted by the device as 0.50 (or 2 × 0.25).
The device should use a multiplier that can be represented exactly as a base-2 fraction.
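The translation can be sketched in one line (the function name is illustrative, not part of the API):

```cpp
#include <cassert>

// Translate a reported integer filter setting into the actual filter
// value using the documented formula:
//   Actual Value = Set Value * Multiplier.
// With a multiplier of 0.25, a set value of 2 means 0.50.
float ActualFilterValue(int setValue, float multiplier)
{
    return setValue * multiplier;
}
```

A multiplier that is an exact base-2 fraction, such as 0.25, keeps this computation exact in floating point.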
-The minimum value of the filter.
The maximum value of the filter.
The default value of the filter.
A multiplier. Use the following formula to translate the filter setting into the actual filter value: Actual Value = Set Value × Multiplier.
Contains parameters for a DirectX Video Acceleration (DXVA) image filter.
-Filter level.
Filter threshold.
Filter radius.
Returns a
You can use this function for DirectX Video Acceleration (DXVA) operations that require alpha values expressed as fixed-point numbers.
-
Defines a video frequency.
-The value 0/0 indicates an unknown frequency. Values of the form n/0, where n is not zero, are invalid. Values of the form 0/n, where n is not zero, indicate a frequency of zero.
-Numerator of the frequency.
Denominator of the frequency.
Contains values for DirectX Video Acceleration (DXVA) video processing operations.
-Brightness value.
Contrast value.
Hue value.
Saturation value.
Contains a rational number (ratio).
-Values of the form 0/n are interpreted as zero. The value 0/0 is interpreted as zero. However, these values are not necessarily valid in all contexts.
Values of the form n/0, where n is nonzero, are invalid.
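The validity rules above amount to a single check (the names here are illustrative, not the actual structure):

```cpp
#include <cassert>

// A rational number (ratio). Per the rules above, 0/n and 0/0 are
// interpreted as zero, while n/0 with nonzero n is invalid.
struct Ratio { int Numerator; int Denominator; };

bool IsValidRatio(Ratio r)
{
    return !(r.Denominator == 0 && r.Numerator != 0);
}
```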
-The numerator of the ratio.
The denominator of the ratio.
Contains per-stream data for the
Specifies the planar alpha value for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-For each pixel, the destination color value is computed as follows:
Cd = Cs * (As * Ap * Ae) + Cd * (1.0 - As * Ap * Ae)
where
Cd = Color value of the destination pixel.
Cs = Color value of the source pixel.
As = Per-pixel source alpha.
Ap = Planar alpha value.
Ae = Palette-entry alpha value, or 1.0 (see Note).
Note: Palette-entry alpha values apply only to palettized color formats, and only when the device supports the
The destination alpha value is computed according to the
To get the device capabilities, call
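The per-pixel blend above can be written directly (the function name is illustrative, not part of the API):

```cpp
#include <cassert>

// Documented blend: Cd = Cs*(As*Ap*Ae) + Cd*(1.0 - As*Ap*Ae), where
// As is the per-pixel source alpha, Ap the planar alpha, and Ae the
// palette-entry alpha (1.0 for non-palettized formats).
float BlendPixel(float Cs, float Cd, float As, float Ap, float Ae = 1.0f)
{
    float a = As * Ap * Ae;
    return Cs * a + Cd * (1.0f - a);
}
```

With a planar alpha of 1.0 the source replaces the destination; with 0.0 the destination is unchanged.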
If TRUE, alpha blending is enabled. Otherwise, alpha blending is disabled. The default state value is
Specifies the planar alpha value as a floating-point number from 0.0 (transparent) to 1.0 (opaque).
If the Enable member is
Specifies the pixel aspect ratio (PAR) for the source and destination rectangles.
-Pixel aspect ratios of the form 0/n and n/0 are not valid.
If the Enable member is
If TRUE, the SourceAspectRatio and DestinationAspectRatio members contain valid values. Otherwise, the pixel aspect ratios are unspecified.
A
A
Specifies the format for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-The surface format, specified as a
The default state value is
Specifies the destination rectangle for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-Specifies whether to use the destination rectangle, or use the entire output surface. The default state value is
Value | Meaning |
---|---|
| Use the destination rectangle given in the DestinationRect member. |
Use the entire output surface as the destination rectangle. |
The destination rectangle, which defines the portion of the output surface where the source rectangle is blitted. The destination rectangle is given in pixel coordinates, relative to the output surface. The default value is an empty rectangle, (0, 0, 0, 0).
If the Enable member is
Specifies the level for a filtering operation on a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream.
-For a list of image filters that are defined for DXVA-HD, see
If TRUE, the filter is enabled. Otherwise, the filter is disabled.
The level for the filter. The meaning of this value depends on the implementation. To get the range and default value of a particular filter, call the
If the Enable member is
Specifies how a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream is interlaced.
-Some devices do not support interlaced RGB. Interlaced RGB support is indicated by the
Some devices do not support interlaced formats with palettized color. This support is indicated by the
To get the device's capabilities, call
The video interlacing, specified as a
The default state value is
Specifies the color space for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream.
-The RGB_Range member applies to RGB input, while the YCbCr_Matrix and YCbCr_xvYCC members apply to YCbCr (YUV) input.
In some situations, the device might perform an intermediate color conversion on the input stream. If so, it uses the flags that apply to both color spaces. For example, suppose the device converts from RGB to YCbCr. If the RGB_Range member is 0 and the YCbCr_Matrix member is 1, the device will convert from full-range RGB to BT.709 YCbCr.
If the device supports xvYCC, it returns the
Specifies whether the input stream contains video or graphics. The device can optimize the processing based on the type. The default state value is 0 (video).
Value | Meaning |
---|---|
| Video. |
| Graphics. |
Specifies the RGB color range. The default state value is 0 (full range).
Value | Meaning |
---|---|
| Full range (0-255). |
| Limited range (16-235). |
Specifies the YCbCr transfer matrix. The default state value is 0 (BT.601).
Value | Meaning |
---|---|
| ITU-R BT.601. |
| ITU-R BT.709. |
Specifies whether the input stream uses conventional YCbCr or extended YCbCr (xvYCC). The default state value is 0 (conventional YCbCr).
Value | Meaning |
---|---|
| Conventional YCbCr. |
| Extended YCbCr (xvYCC). |
Specifies the luma key for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-To use this state, the device must support luma keying, indicated by the
If the device does not support luma keying, the
If the input format is RGB, the device must also support the
The values of Lower and Upper give the lower and upper bounds of the luma key, using a nominal range of [0...1]. Given a format with n bits per channel, these values are converted to luma values as follows:
val = f * ((1 << n)-1)
Any pixel whose luma value falls within the upper and lower bounds (inclusive) is treated as transparent.
For example, if the pixel format uses 8-bit luma, the upper bound is calculated as follows:
BYTE Y = BYTE(max(min(1.0, Upper), 0.0) * 255.0)
Note that the value is clamped to the range [0...1] before multiplying by 255.
- If TRUE, luma keying is enabled. Otherwise, luma keying is disabled. The default value is
The lower bound for the luma key. The range is [0...1]. The default state value is 0.0.
The upper bound for the luma key. The range is [0...1]. The default state value is 0.0.
Specifies the output frame rate for an input stream when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-The output rate might require the device to convert the frame rate of the input stream. If so, the value of RepeatFrame controls whether the device creates interpolated frames or simply repeats input frames.
-Specifies how the device performs frame-rate conversion, if required. The default state value is
Value | Meaning |
---|---|
| The device repeats frames. |
| The device interpolates frames. |
Specifies the output rate, as a member of the
Specifies a custom output rate, as a
To get the list of custom rates supported by the video processor, call
Contains the color palette entries for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-This stream state is used for input streams that have a palettized color format. Palettized formats with 4 bits per pixel (bpp) use the first 16 entries in the list. Formats with 8 bpp use the first 256 entries.
If a pixel has a palette index greater than the number of entries, the device treats the pixel as being white with opaque alpha. For full-range RGB, this value will be (255, 255, 255, 255); for YCbCr the value will be (255, 235, 128, 128).
The caller allocates the pEntries array. Set the Count member to the number of elements in the array. When retrieving the state data, you can set the pEntries member to
If the DXVA-HD device does not have the
To get the device capabilities, call
The number of palette entries. The default state value is 0.
A reference to an array of
Contains data for a private stream state, for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream.
-Use this structure for proprietary or device-specific state parameters.
The caller allocates the pData array. Set the DataSize member to the size of the array in bytes. When retrieving the state data, you can set the pData member to
A
Value | Meaning |
---|---|
| Retrieves statistics about inverse telecine. The state data (pData) is a |
A device can define additional GUIDs for use with custom stream states. The interpretation of the data is then defined by the device.
The size, in bytes, of the buffer pointed to by the pData member.
A reference to a buffer that contains the private state data. The DXVA-HD runtime passes this buffer directly to the device, without validation.
Contains inverse telecine (IVTC) statistics from a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-If the DXVA-HD device supports IVTC statistics, it can detect when the input video contains telecined frames. You can use this information to enable IVTC in the device.
To enable IVTC statistics, do the following:
sizeof( ).
To get the most recent IVTC statistics from the device, call the
Typically, an application would use this feature as follows:
Specifies whether IVTC statistics are enabled. The default state value is
If the driver detects that the frames are telecined, and is able to perform inverse telecine, this field contains a member of the
The number of consecutive telecined frames that the device has detected.
The index of the most recent input field. The value of this member equals the most recent value of the InputFrameOrField member of the
Specifies the source rectangle for an input stream when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD)
-Specifies whether to blit the entire input surface or just the source rectangle. The default state value is
Value | Meaning |
---|---|
| Use the source rectangle specified in the SourceRect member. |
| Blit the entire input surface. Ignore the SourceRect member. |
The source rectangle, which defines the portion of the input sample that is blitted to the destination surface. The source rectangle is given in pixel coordinates, relative to the input surface. The default state value is an empty rectangle, (0, 0, 0, 0).
If the Enable member is
Contains references to functions implemented by a software plug-in for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-If you provide a software plug-in for DXVA-HD, the plug-in must implement a set of functions that are defined by the function reference types in this structure.
At initialization, the DXVA-HD runtime calls the plug-in device's PDXVAHDSW_Plugin function. This function fills in a
Function reference of type PDXVAHDSW_CreateDevice.
Function reference of type PDXVAHDSW_ProposeVideoPrivateFormat.
Function reference of type PDXVAHDSW_GetVideoProcessorDeviceCaps.
Function reference of type PDXVAHDSW_GetVideoProcessorOutputFormats.
Function reference of type PDXVAHDSW_GetVideoProcessorInputFormats.
Function reference of type PDXVAHDSW_GetVideoProcessorCaps.
Function reference of type PDXVAHDSW_GetVideoProcessorCustomRates.
Function reference of type PDXVAHDSW_GetVideoProcessorFilterRange.
Function reference of type PDXVAHDSW_DestroyDevice.
Function reference of type PDXVAHDSW_CreateVideoProcessor.
Function reference of type PDXVAHDSW_SetVideoProcessBltState.
Function reference of type PDXVAHDSW_GetVideoProcessBltStatePrivate.
Function reference of type PDXVAHDSW_SetVideoProcessStreamState.
Function reference of type PDXVAHDSW_GetVideoProcessStreamStatePrivate.
Function reference of type PDXVAHDSW_VideoProcessBltHD.
Function reference of type PDXVAHDSW_DestroyVideoProcessor.
Defines the range of supported values for a DirectX Video Acceleration (DXVA) operation.
-All values in this structure are specified as
Minimum supported value.
Maximum supported value.
Default value.
Minimum increment between values.
Describes a video stream for a DXVA decoder device or video processor device.
-The InputSampleFreq member gives the frame rate of the decoded video stream, as received by the video renderer. The OutputFrameFreq member gives the frame rate of the video that is displayed after deinterlacing. If the input video is interlaced and the samples contain interleaved fields, the output frame rate is twice the input frame rate. If the input video is progressive or contains single fields, the output frame rate is the same as the input frame rate.
Decoders should set the values of InputSampleFreq and OutputFrameFreq if the frame rate is known. Otherwise, set these members to 0/0 to indicate an unknown frame rate.
-Width of the video frame, in pixels.
Height of the video frame, in pixels.
Additional details about the video format, specified as a
Surface format, specified as a
Frame rate of the input video stream, specified as a
Frame rate of the output video, specified as a
Level of data protection required when the user accessible bus (UAB) is present. If TRUE, the video must be protected when a UAB is present. If
Reserved. Must be zero.
Contains parameters for the
Describes the capabilities of a DirectX Video Acceleration (DXVA) video processor mode.
-Identifies the type of device. The following values are defined.
Value | Meaning |
---|---|
DXVA 2.0 video processing is emulated by using DXVA 1.0. An emulated device may be missing significant processing capabilities and have lower image quality and performance. | |
Hardware device. | |
Software device. |
The Direct3D memory pool used by the device.
Number of forward reference samples the device needs to perform deinterlacing. For the bob, progressive scan, and software devices, the value is zero.
Number of backward reference samples the device needs to perform deinterlacing. For the bob, progressive scan, and software devices, the value is zero.
Reserved. Must be zero.
Identifies the deinterlacing technique used by the device. This value is a bitwise OR of one or more of the following flags.
Value | Meaning |
---|---|
The algorithm is unknown or proprietary. | |
The algorithm creates missing lines by repeating the line either above or below the missing line. This algorithm produces a jagged image and is not recommended. | |
The algorithm creates missing lines by averaging two lines. Slight vertical adjustments are made so that the resulting image does not bob up and down. | |
The algorithm creates missing lines by applying a [-1, 9, 9, -1]/16 filter across four lines. Slight vertical adjustments are made so that the resulting image does not bob up and down. | |
The algorithm uses median filtering to recreate the pixels in the missing lines. | |
The algorithm uses an edge filter to create the missing lines. In this process, spatial directional filters are applied to determine the orientation of edges in the picture content. Missing pixels are created by filtering along (rather than across) the detected edges. | |
The algorithm uses spatial or temporal interpolation, switching between the two on a field-by-field basis, depending on the amount of motion. | |
The algorithm uses spatial or temporal interpolation, switching between the two on a pixel-by-pixel basis, depending on the amount of motion. | |
The algorithm identifies objects within a sequence of video fields. Before it recreates the missing pixels, it aligns the movement axes of the individual objects in the scene to make them parallel with the time axis. | |
The device can undo the 3:2 pulldown process used in telecine. |
Specifies the available video processor (ProcAmp) operations. The value is a bitwise OR of ProcAmp Settings constants.
Specifies operations that the device can perform concurrently with the
Value | Meaning |
---|---|
The device can convert the video from YUV color space to RGB color space, with at least 8 bits of precision for each RGB component. | |
The device can stretch or shrink the video horizontally. If this capability is present, aspect ratio correction can be performed at the same time as deinterlacing. | |
The device can stretch or shrink the video vertically. If this capability is present, image resizing and aspect ratio correction can be performed at the same time. | |
The device can alpha blend the video. | |
The device can operate on a subrectangle of the video frame. If this capability is present, source images can be cropped before further processing occurs. | |
The device can accept substreams in addition to the primary video stream, and can composite them. | |
The device can perform color adjustments on the primary video stream and substreams, at the same time that it deinterlaces the video and composites the substreams. The destination color space is defined in the DestFormat member of the | |
The device can convert the video from YUV to RGB color space when it writes the deinterlaced and composited pixels to the destination surface. An RGB destination surface could be an off-screen surface, texture, Direct3D render target, or combined texture/render target surface. An RGB destination surface must use at least 8 bits for each color channel. | |
The device can perform an alpha blend operation with the destination surface when it writes the deinterlaced and composited pixels to the destination surface. | |
The device can downsample the output frame, as specified by the ConstrictionSize member of the | |
The device can perform noise filtering. | |
The device can perform detail filtering. | |
The device can perform a constant alpha blend to the entire video stream when it composites the video stream and substreams. | |
The device can perform accurate linear RGB scaling, rather than performing it in nonlinear gamma space. | |
The device can correct the image to compensate for artifacts introduced when performing scaling in nonlinear gamma space. | |
The deinterlacing algorithm preserves the original field lines from the interlaced field picture, unless scaling is also applied. For example, in deinterlacing algorithms such as bob and median filtering, the device copies the original field into every other scan line and then applies a filter to reconstruct the missing scan lines. As a result, the original field can be recovered by discarding the scan lines that were interpolated. If the image is scaled vertically, however, the original field lines cannot be recovered. If the image is scaled horizontally (but not vertically), the resulting field lines will be equivalent to scaling the original field picture. (In other words, discarding the interpolated scan lines will yield the same result as stretching the original picture without deinterlacing.) |
Specifies the supported noise filters. The value is a bitwise OR of the following flags.
Value | Meaning |
---|---|
Noise filtering is not supported. | |
Unknown or proprietary filter. | |
Median filter. | |
Temporal filter. | |
Block noise filter. | |
Mosquito noise filter. |
Specifies the supported detail filters. The value is a bitwise OR of the following flags.
Value | Meaning |
---|---|
Detail filtering is not supported. | |
Unknown or proprietary filter. | |
Edge filter. | |
Sharpen filter. |
Specifies an input sample for the
Specifies the capabilities of the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
-A
The number of past reference frames required to perform the optimal video processing.
The number of future reference frames required to perform the optimal video processing.
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The number of custom output frame rates. To get the list of custom frame rates, call the
Specifies the capabilities of a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-In DXVA-HD, the device stores state information for each input stream. These states persist between blits. With each blit, the application selects which streams to enable or disable. Disabling a stream does not affect the state information for that stream.
The MaxStreamStates member gives the maximum number of stream states that can be set by the application. The MaxInputStreams member gives the maximum number of streams that can be enabled during a blit. These two values can differ.
To set the state data for a stream, call
Specifies the device type, as a member of the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The memory pool that is required for the input video surfaces.
The number of supported output formats. To get the list of output formats, call the
The number of supported input formats. To get the list of input formats, call the
The number of video processors. Each video processor represents a distinct set of processing capabilities. To get the capabilities of each video processor, call the
The maximum number of input streams that can be enabled at the same time.
The maximum number of input streams for which the device can store state data.
Enables two threads to share the same Microsoft Direct3D 11 device.
-This interface is exposed by the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager. To create the DXGI Device Manager, call the
When you create an
For Microsoft Direct3D 9 devices, use the IDirect3DDeviceManager9 interface.
Windows Store apps must use
[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Creates an instance of the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
Sets the Microsoft Direct3D device or notifies the device manager that the Direct3D device was reset.
-A reference to the
When you first create the DXGI Device Manager, call this method with a reference to the Direct3D device. (The device manager does not create the device; the caller must provide the device reference initially.) Also call this method if the Direct3D device becomes lost and you need to reset the device or create a new device.
The resetToken parameter ensures that only the component that originally created the device manager can invalidate the current device.
If this method succeeds, all open device handles become invalid.
Unlocks the Microsoft Direct3D device.
-A handle to the Direct3D device. To get the device handle, call
Call this method to release the device after calling
Enables two threads to share the same Microsoft Direct3D 11 device.
-This interface is exposed by the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager. To create the DXGI Device Manager, call the
When you create an
For Microsoft Direct3D 9 devices, use the IDirect3DDeviceManager9 interface.
Windows Store apps must use
Queries the Microsoft Direct3D device for an interface.
-A handle to the Direct3D device. To get the device handle, call
The interface identifier (IID) of the requested interface. The Direct3D device supports the following interfaces:
Receives a reference to the requested interface. The caller must release the interface.
If the method returns
For more info, see Supporting Direct3D 11 Video Decoding in Media Foundation.
-Gives the caller exclusive access to the Microsoft Direct3D device.
-A handle to the Direct3D device. To get the device handle, call
The interface identifier (IID) of the requested interface. The Direct3D device will support the following interfaces:
Specifies whether to wait for the device lock. If the device is already locked and this parameter is TRUE, the method blocks until the device is unlocked. Otherwise, if the device is locked and this parameter is
Receives a reference to the requested interface. The caller must release the interface.
When you are done using the Direct3D device, call
If the method returns
If fBlock is TRUE, this method can potentially deadlock. For example, it will deadlock if a thread calls LockDevice and then waits on another thread that calls LockDevice. It will also deadlock if a thread calls LockDevice twice without calling UnlockDevice in between.
-Gets a handle to the Microsoft Direct3D device.
-Receives the device handle.
Enables two threads to share the same Microsoft Direct3D 11 device.
-This interface is exposed by the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager. To create the DXGI Device Manager, call the
When you create an
For Microsoft Direct3D 9 devices, use the IDirect3DDeviceManager9 interface.
Windows Store apps must use
Tests whether a Microsoft Direct3D device handle is valid.
-A handle to the Direct3D device. To get the device handle, call
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The specified handle is not a Direct3D device handle. |
| The device handle is invalid. |
If the method returns
Unlocks the Microsoft Direct3D device.
-A handle to the Direct3D device. To get the device handle, call
Reserved.
If this method succeeds, it returns
Call this method to release the device after calling
Defines the ASF indexer options.
-The indexer creates a new index object.
The indexer returns values for reverse playback.
The indexer creates an index object for a live ASF stream.
Defines the ASF multiplexer options.
-The multiplexer automatically adjusts the bit rate of the ASF content in response to the characteristics of the streams being multiplexed.
Defines the selection options for an ASF stream.
-No samples from the stream are delivered.
Only samples from the stream that are clean points are delivered.
All samples from the stream are delivered.
Defines the ASF splitter options.
-The splitter delivers samples for the ASF content in reverse order to accommodate reverse playback.
The splitter delivers samples for streams that are protected with Windows Media Digital Rights Management.
Defines status conditions for the
Defines the ASF stream selector options.
-The stream selector will not set thinning. Thinning is the process of removing samples from a stream to reduce the bit rate.
The stream selector will use the average bit rate of streams when selecting streams.
Specifies the type of work queue for the
Defines flags for serializing and deserializing attribute stores.
-If this flag is set,
Specifies how to compare the attributes on two objects.
-Check whether all the attributes in pThis exist in pTheirs and have the same data, where pThis is the object whose Compare method is being called and pTheirs is the object given in the pTheirs parameter.
Check whether all the attributes in pTheirs exist in pThis and have the same data, where pThis is the object whose Compare method is being called and pTheirs is the object given in the pTheirs parameter.
Check whether both objects have identical attributes with the same data.
Check whether the attributes that exist in both objects have the same data.
Find the object with the fewest number of attributes, and check if those attributes exist in the other object and have the same data.
Defines the data type for a key/value pair.
-Unsigned 32-bit integer.
Unsigned 64-bit integer.
Floating-point number.
Byte array.
Specifies values for audio constriction.
-Values defined by the
Audio is not constricted.
Audio is down sampled to 48 kHz/16-bit.
Audio is down sampled to 44 kHz/16-bit.
Audio is down sampled to 14 kHz/16-bit.
Audio is muted.
Contains flags for the
Specifies the origin for a seek request.
-The seek position is specified relative to the start of the stream.
The seek position is specified relative to the current read/write position in the stream.
Specifies a type of capture device.
-An audio capture device, such as a microphone.
A video capture device, such as a webcam.
Specifies a type of capture sink.
-A recording sink, for capturing audio and video to a file.
A preview sink, for previewing live audio or video.
A photo sink, for capturing still images.
Defines the values for the source stream category.
-Specifies a video preview stream.
Specifies a video capture stream.
Specifies an independent photo stream.
Specifies a dependent photo stream.
Specifies an audio stream.
Specifies an unsupported stream.
Contains flags that describe the characteristics of a clock. These flags are returned by the
Defines properties of a clock.
-Jitter values are always negative. In other words, the time returned by
Defines the state of a clock.
-The clock is invalid. A clock might be invalid for several reasons. Some clocks return this state before the first start. This state can also occur if the underlying device is lost.
The clock is running. While the clock is running, the time advances at the clock's frequency and current rate.
The clock is stopped. While stopped, the clock reports a time of 0.
The clock is paused. While paused, the clock reports the time it was paused.
Specifies how the topology loader connects a topology node. This enumeration is used with the
The SetOutputStreamState method sets the Device MFT output stream state and media type.
-This method transitions the output stream to a specified state, with the specified media type set on the stream. The DTM uses it when the Device Source requests a change to a specific output stream's state and media type. The Device MFT should change the specified output stream's media type and state to the requested values.
If the incoming media type and stream state are the same as the current media type and stream state, the method returns
If the incoming media type and the current media type of the stream are the same, the Device MFT must change the stream's state to the requested value and return the appropriate
When a change in the output stream's media type requires a corresponding change in the input, the Device MFT must post the
As an example, consider a Device MFT that has two input streams and three output streams. Let Output 1 and Output 2 source from Input 1 and stream at 720p. Now suppose Output 2's media type changes to 1080p. To satisfy this request, the Device MFT must change the Input 1 media type to 1080p, by posting
Stream ID of the input stream where the state and media type need to be changed.
Preferred media type for the input stream is passed in through this parameter. Device MFT should change the media type only if the incoming media type is different from the current media type.
Specifies the DeviceStreamState which the input stream should transition to.
Must be zero.
The DMO_INPUT_DATA_BUFFER_FLAGS enumeration defines flags that describe an input buffer.
The beginning of the data is a synchronization point.
The buffer's time stamp is valid.
The buffer's indicated time length is valid.
Media Foundation transforms (MFTs) are an evolution of the transform model first introduced with DirectX Media Objects (DMOs). This topic summarizes the main ways in which MFTs differ from DMOs. Read this topic if you are already familiar with the DMO interfaces, or if you want to convert an existing DMO into an MFT.
This topic contains the following sections:
The DMO_INPUT_STREAM_INFO_FLAGS enumeration defines flags that describe an input stream.
The stream requires whole samples. Samples must not span multiple buffers, and buffers must not contain partial samples.
Each buffer must contain exactly one sample.
All the samples in this stream must be the same size.
The DMO performs lookahead on the incoming data, and may hold multiple input buffers for this stream.
The DMO_PROCESS_OUTPUT_FLAGS enumeration defines flags that specify output processing requests.
Discard the output when the reference to the output buffer is
The DMO_SET_TYPE_FLAGS enumeration defines flags for setting the media type on a stream.
The
Test the media type but do not set it.
Clear the media type that was set for the stream.
Contains flags that are used to configure the Microsoft DirectShow enhanced video renderer (EVR) filter.
-Enables dynamic adjustments to video quality during playback.
Specifies the requested access mode for opening a file.
-Read mode.
Write mode.
Read and write mode.
Specifies the behavior when opening a file.
-Use the default behavior.
Open the file with no system caching.
Subsequent open operations can have write access to the file.
Note: Requires Windows 7 or later.
Specifies how to open or create a file.
-Open an existing file. Fail if the file does not exist.
Create a new file. Fail if the file already exists.
Open an existing file and truncate it, so that the size is zero bytes. Fail if the file does not already exist.
If the file does not exist, create a new file. If the file exists, open it.
Create a new file. If the file exists, overwrite the file.
Describes the type of data provided by a frame source.
-The values of this enumeration are used with the MF_DEVICESTREAM_ATTRIBUTE_FRAMESOURCE_TYPES attribute.
-The frame source provides color data.
The frame source provides infrared data.
The frame source provides depth data.
The frame source provides custom data.
Specifies the likelihood that the Media Engine can play a specified type of media resource.
-The Media Engine cannot play the resource.
The Media Engine might be able to play the resource.
The Media Engine can probably play the resource.
Contains flags for the
Defines error status codes for the Media Engine.
-The values greater than zero correspond to error codes defined for the MediaError object in HTML5.
-No error.
The process of fetching the media resource was stopped at the user's request.
A network error occurred while fetching the media resource.
An error occurred while decoding the media resource.
The media resource is not supported.
An error occurred while encrypting the media resource.
Supported in Windows 8.1 and later.
Defines event codes for the Media Engine.
-The application receives Media Engine events through the
Values below 1000 correspond to events defined in HTML 5 for media elements.
-The Media Engine has started to load the source. See
The Media Engine is loading the source.
The Media Engine has suspended a load operation.
The Media Engine cancelled a load operation that was in progress.
An error occurred.
Event Parameter | Description |
---|---|
param1 | A member of the |
param2 | An |
The Media Engine has switched to the
The Load algorithm is stalled, waiting for data.
The Media Engine is switching to the playing state. See
The media engine has paused. See
The Media Engine has loaded enough source data to determine the duration and dimensions of the source.
The Media Engine has loaded enough data to render some content (for example, a video frame).
Playback has stopped because the next frame is not available.
Playback has started. See
Playback can start, but the Media Engine might need to stop to buffer more data.
The Media Engine can probably play through to the end of the resource, without stopping to buffer data.
The Media Engine has started seeking to a new playback position. See
The Media Engine has seeked to a new playback position. See
The playback position has changed. See
Playback has reached the end of the source. This event is not sent if GetLoop is TRUE.
The playback rate has changed. See
The duration of the media source has changed. See
The audio volume changed. See
The output format of the media source has changed.
Event Parameter | Description |
---|---|
param1 | Zero if the video format changed, 1 if the audio format changed. |
param2 | Zero. |
The Media Engine flushed any pending events from its queue.
The playback position reached a timeline marker. See
The audio balance changed. See
The Media Engine has finished downloading the source data.
The media source has started to buffer data.
The media source has stopped buffering data.
The
The Media Engine's Load algorithm is waiting to start.
Event Parameter | Description |
---|---|
param1 | A handle to a waitable event, of type HANDLE. |
param2 | Zero. |
If Media Engine is created with the
If the Media Engine is not created with the
The first frame of the media source is ready to render.
Raised when a new track is added or removed.
Supported in Windows 8.1 and later.
Raised when there is new information about the Output Protection Manager (OPM).
This event will be raised when an OPM failure occurs, but ITA allows fallback without the OPM. In this case, constriction can be applied.
This event will not be raised when there is an OPM failure and the fallback also fails. For example, if ITA blocks playback entirely when OPM cannot be established.
Supported in Windows 8.1 and later.
Raised when one of the component streams of a media stream fails. This event is only raised if the media stream contains other component streams that did not fail.
Specifies media engine extension types.
-Specifies the content protection requirements for a video frame.
-The video frame should be protected.
Direct3D surface protection must be applied to any surface that contains the frame.
Direct3D anti-screen-scrape protection must be applied to any surface that contains the frame.
Defines media key error codes for the media engine.
-Unknown error occurred.
An error with the client occurred.
An error with the service occurred.
An error with the output occurred.
An error occurred related to a hardware change.
An error with the domain occurred.
Defines network status codes for the Media Engine.
-The initial state.
The Media Engine has started the resource selection algorithm, and has selected a media resource, but is not using the network.
The Media Engine is loading a media resource.
The Media Engine has started the resource selection algorithm, but has not selected a media resource.
Defines the status of the Output Protection Manager (OPM).
-Defines preload hints for the Media Engine. These values correspond to the preload attribute of the HTMLMediaElement interface in HTML5.
-The preload attribute is missing.
The preload attribute is an empty string. This value is equivalent to
The preload attribute is "none". This value is a hint to the user agent not to preload the resource.
The preload attribute is "metadata". This value is a hint to the user agent to fetch the resource metadata.
The preload attribute is "auto". This value is a hint to the user agent to preload the entire resource.
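These preload hints are plain HTML5 attribute states. A small sketch of how the attribute text might map onto them; the enum and function names here are invented for illustration, not SDK identifiers, and anything other than the recognized strings is treated as "auto" for simplicity:

```cpp
#include <cassert>
#include <optional>
#include <string>

enum class Preload { Missing, Empty, None, Metadata, Auto }; // illustrative names

// Maps the HTML5 preload attribute to the hints described above.
// A missing attribute and an empty string are distinct cases, although the
// empty string is documented as equivalent to "auto".
Preload parsePreload(const std::optional<std::string>& attr) {
    if (!attr)               return Preload::Missing;
    if (attr->empty())       return Preload::Empty;    // equivalent to "auto"
    if (*attr == "none")     return Preload::None;     // hint: do not preload
    if (*attr == "metadata") return Preload::Metadata; // hint: fetch metadata only
    return Preload::Auto;                              // "auto" (and anything else, here)
}
```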
Contains flags that specify whether the Media Engine will play protected content, and whether the Media Engine will use the Protected Media Path (PMP).
-These flags are used with the
Defines ready-state values for the Media Engine.
-These values correspond to constants defined for the HTMLMediaElement.readyState attribute in HTML5.
-No data is available.
Some metadata is available, including the duration and, for video files, the video dimensions. No media data is available.
There is media data for the current playback position, but not enough data for playback or seeking.
There is enough media data to enable some playback or seeking. The amount of data might be as little as the next video frame.
There is enough data to play the resource, based on the current rate at which the resource is being fetched.
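The five ready states are ordered like the HTML5 readyState constants (0 = no data through 4 = enough data), so a player can gate playback and seeking with simple comparisons. A minimal sketch with invented names:

```cpp
#include <cassert>

// Hypothetical stand-ins for the five ready-state values described above,
// ordered like the HTML5 readyState constants (0 = no data ... 4 = enough data).
enum class ReadyState {
    HaveNothing = 0,     // no data is available
    HaveMetadata = 1,    // duration/dimensions known, no media data yet
    HaveCurrentData = 2, // data for the current position only
    HaveFutureData = 3,  // enough for some playback/seeking (maybe one frame)
    HaveEnoughData = 4   // enough to play through at the current fetch rate
};

// Playback or seeking needs at least some data beyond the current position.
bool canStartPlayback(ReadyState s) {
    return s >= ReadyState::HaveFutureData;
}

// Duration and video dimensions become available at HaveMetadata.
bool durationKnown(ReadyState s) {
    return s >= ReadyState::HaveMetadata;
}
```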
Specifies the layout for a packed 3D video frame.
-None.
The views are packed side-by-side in a single frame.
The views are packed top-to-bottom in a single frame.
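For packed frames, each view occupies half of the frame in one dimension: side-by-side packing halves the width per view, top-to-bottom packing halves the height. A hedged sketch (names invented):

```cpp
#include <cassert>
#include <utility>

enum class PackingMode { None, SideBySide, TopBottom }; // illustrative names

// Returns the {width, height} of a single view inside a packed frame.
std::pair<int, int> viewSize(PackingMode mode, int frameW, int frameH) {
    switch (mode) {
        case PackingMode::SideBySide: return {frameW / 2, frameH}; // views share the width
        case PackingMode::TopBottom:  return {frameW, frameH / 2}; // views share the height
        default:                      return {frameW, frameH};     // not packed
    }
}
```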
Defines values for the media engine seek mode.
-This enumeration is used with the MediaEngineEx::SetCurrentTimeEx.
-Specifies normal seek.
Specifies an approximate seek.
Identifies statistics that the Media Engine tracks during playback. To get a playback statistic from the Media Engine, call
In the descriptions that follow, the data type and value-type tag for the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Identifies the kind of media stream that failed.
-The stream type is unknown.
The stream is an audio stream.
The stream is a video stream.
Defines the characteristics of a media source. These flags are retrieved by the
To skip forward or backward in a playlist, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Specifies options for the
The following typedef is defined for combining flags from this enumeration.
typedef UINT32 MFP_CREATION_OPTIONS;
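Because the options are bit flags packed into a UINT32, they are combined with bitwise OR and tested with bitwise AND. A minimal sketch with invented flag values (the actual MFP_OPTION_* constants are defined in the SDK headers):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical option bits, one bit per flag; not the real SDK values.
constexpr std::uint32_t OPTION_A = 0x1;
constexpr std::uint32_t OPTION_B = 0x2;
constexpr std::uint32_t OPTION_C = 0x4;

// Mirrors the pattern of: typedef UINT32 MFP_CREATION_OPTIONS;
using CreationOptions = std::uint32_t;

// Bit flags are tested with bitwise AND (unlike the non-bit-flag
// enumerations elsewhere in this reference, which require equality tests).
bool hasOption(CreationOptions opts, std::uint32_t flag) {
    return (opts & flag) != 0;
}
```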
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains flags for the
Some of these flags, marked [out], convey information back to the MFPlay player object. The application should set or clear these flags as appropriate, before returning from the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains flags that describe a media item.
-The following typedef is defined for combining flags from this enumeration.
typedef UINT32 MFP_MEDIAITEM_CHARACTERISTICS;
Not supported.
Note: Earlier versions of this documentation described the _MFT_DRAIN_TYPE enumeration incorrectly. The enumeration is not supported. For more information, see
Defines flags for the
Indicates the status of an input stream on a Media Foundation transform (MFT).
-The input stream can receive more data at this time. To deliver more input data, call
Describes an input stream on a Media Foundation transform (MFT).
-Before the client sets the media types on the transform, the only flags guaranteed to be accurate are the
In the default processing model, an MFT holds a reference count on the sample that it receives in ProcessInput. It does not process the sample immediately inside ProcessInput. When ProcessOutput is called, the MFT produces output data and then discards the input sample. The following variations on this model are defined:
If an MFT never holds onto input samples between ProcessInput and ProcessOutput, it can set the
If an MFT holds some input samples beyond the next call to ProcessOutput, it can set the
Each media sample (
For uncompressed audio formats, this flag is always implied. (It is valid to set the flag, but not required.) An uncompressed audio frame should never span more than one media sample.
Each media sample that the client provides as input must contain exactly one unit of data, as defined for the
If this flag is present, the
An MFT that processes uncompressed audio should not set this flag. The MFT should accept buffers that contain more than a single audio frame, for efficiency.
All input samples must be the same size. The size is given in the cbSize member of the
The MFT might hold one or more input samples after
The MFT does not hold input samples after the
If this flag is absent, the MFT might hold a reference count on the samples that are passed to the ProcessInput method. The client must not re-use or delete the buffer memory until the MFT releases the sample's
The absence of this flag does not guarantee that the MFT holds a reference count on the input samples; it is valid for an MFT to release input samples in ProcessInput even without setting this flag. However, setting this flag might enable the client to optimize how it re-uses buffers.
An MFT should not set this flag if it ever holds onto an input sample after returning from ProcessInput.
This input stream can be removed by calling
This input stream is optional. The transform can produce output without receiving input from this stream. The caller can deselect the stream by not setting a media type or by setting a
The MFT can perform in-place processing. In this mode, the MFT directly modifies the input buffer. When the client calls ProcessOutput, the same sample that was delivered to this stream is returned in the output stream that has a matching stream identifier. This flag implies that the MFT holds onto the input buffer, so this flag cannot be combined with the
If this flag is present, the MFT must set the
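The reference-holding variations above can be illustrated with a toy transform in which std::shared_ptr stands in for COM reference counting. This is not real MFT code; all names are invented:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Toy stand-in for a media sample; shared_ptr mimics COM ref counting.
using Sample = std::shared_ptr<std::vector<unsigned char>>;

// Default processing model: the transform keeps a reference to the input
// sample from ProcessInput until ProcessOutput produces output.
class HoldingTransform {
    Sample held_;
public:
    void ProcessInput(Sample s) { held_ = std::move(s); } // keeps a reference
    Sample ProcessOutput() {
        Sample out = held_;
        held_.reset(); // the input is released only once output is produced
        return out;
    }
};

// "Does not hold samples" variation: everything needed is copied inside
// ProcessInput, so no reference outlives the call.
class NonHoldingTransform {
    std::vector<unsigned char> copy_;
public:
    void ProcessInput(const Sample& s) { copy_ = *s; } // copies, drops reference
    std::size_t PendingBytes() const { return copy_.size(); }
};
```

With the holding model, the client must not reuse the buffer until the transform releases it; with the non-holding model, the buffer is free as soon as ProcessInput returns.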
Defines flags for the
The values in this enumeration are not bit flags, so they should not be combined with a bitwise OR. Also, the caller should test for these flags with the equality operator, not a bitwise AND:
// Correct.
if (Buffer.dwStatus == )
{ ...
}

// Incorrect.
if ((Buffer.dwStatus & ) != 0)
{ ...
}
Indicates whether a Media Foundation transform (MFT) can produce output data.
-There is a sample available for at least one output stream. To retrieve the available output samples, call
Describes an output stream on a Media Foundation transform (MFT).
-Before the client sets the media types on the MFT, the only flag guaranteed to be accurate is the
The
MFT_OUTPUT_STREAM_DISCARDABLE: The MFT discards output data only if the client calls ProcessOutput with the
MFT_OUTPUT_STREAM_LAZY_READ: If the client continues to call ProcessInput without collecting the output from this stream, the MFT eventually discards the output. If all output streams have the
If neither of these flags is set, the MFT never discards output data.
-Each media sample (
For uncompressed audio formats, this flag is always implied. (It is valid to set the flag, but not required.) An uncompressed audio frame should never span more than one media sample.
Each output sample contains exactly one unit of data, as defined for the
If this flag is present, the
An MFT that outputs uncompressed audio should not set this flag. For efficiency, it should output more than one audio frame at a time.
All output samples are the same size.
The MFT can discard the output data from this output stream, if requested by the client. To discard the output, set the
This output stream is optional. The client can deselect the stream by not setting a media type or by setting a
The MFT provides the output samples for this stream, either by allocating them internally or by operating directly on the input samples. The MFT cannot use output samples provided by the client for this stream.
If this flag is not set, the MFT must set cbSize to a nonzero value in the
The MFT can either provide output samples for this stream or it can use samples that the client allocates. This flag cannot be combined with the
If the MFT does not set this flag or the
The MFT does not require the client to process the output for this stream. If the client continues to send input data without getting the output from this stream, the MFT simply discards the previous input.
The MFT might remove this output stream during streaming. This flag typically applies to demultiplexers, where the input data contains multiple streams that can start and stop during streaming. For more information, see
Defines flags for setting or testing the media type on a Media Foundation transform (MFT).
-Test the proposed media type, but do not set it.
Defines the different error states of the Media Source Extension.
-Specifies no error.
Specifies an error with the network.
Specifies an error with decoding.
Specifies an unknown error.
Defines the different ready states of the Media Source Extension.
-The media source is closed.
The media source is open.
The media source is ended.
Specifies how the user's credentials will be used.
-The credentials will be used to authenticate with a proxy.
The credentials will be sent over the network unencrypted.
The credentials must be from a user who is currently logged on.
Describes options for the caching network credentials.
-Allow the credential cache object to save credentials in persistent storage.
Do not allow the credential cache object to cache the credentials in memory. This flag cannot be combined with the
The user allows credentials to be sent over the network in clear text.
By default,
Do not set this flag without notifying the user that credentials might be sent in clear text.
Specifies how the credential manager should obtain user credentials.
-The application implements the credential manager, which must expose the
The credential cache object sets the
The credential manager should prompt the user to provide the credentials.
Note: Requires Windows 7 or later.
The credentials are saved to persistent storage. This flag acts as a hint for the application's UI. If the application prompts the user for credentials, the UI can indicate that the credentials have already been saved.
Specifies how the default proxy locator will specify the connection settings to a proxy server. The application must set these values in the MFNETSOURCE_PROXYSETTINGS property.
Defines the status of the cache for a media file or entry.
-The cache for a file or entry does not exist.
The cache for a file or entry is growing.
The cache for a file or entry is completed.
Indicates the type of control protocol that is used in streaming or downloading.
-The protocol type has not yet been determined.
The protocol type is HTTP. This includes HTTPv9, WMSP, and HTTP download.
The protocol type is Real Time Streaming Protocol (RTSP).
The content is read from a file. The file might be local or on a remote share.
The protocol type is multicast.
Note: Requires Windows 7 or later. Defines statistics collected by the network source. The values in this enumeration define property identifiers (PIDs) for the MFNETSOURCE_STATISTICS property.
To retrieve statistics from the network source, call
In the descriptions that follow, the data type and value-type tag for the
Describes the type of transport used in streaming or downloading data (TCP or UDP).
-The data transport type is UDP.
The data transport type is TCP.
Specifies whether color data includes headroom and toeroom. Headroom allows for values beyond 1.0 white ("whiter than white"), and toeroom allows for values below reference 0.0 black ("blacker than black").
- This enumeration is used with the
For more information about these values, see the remarks for the DXVA2_NominalRange enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
-Unknown nominal range.
Equivalent to
Equivalent to
The normalized range [0...1] maps to [0...255] for 8-bit samples or [0...1023] for 10-bit samples.
The normalized range [0...1] maps to [16...235] for 8-bit samples or [64...940] for 10-bit samples.
The normalized range [0...1] maps to [48...208] for 8-bit samples or [192...832] for 10-bit samples.
The normalized range [0..1] maps to [64...127] for 8-bit samples or [256...508] for 10-bit samples. This range is used in the xRGB color space.
Note: Requires Windows 7 or later.
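Each nominal range is a linear mapping from normalized [0...1] values onto a code-value interval. A hedged sketch (the helper name is invented):

```cpp
#include <cassert>
#include <cmath>

// Maps a normalized sample value in [0, 1] onto a nominal code range,
// e.g. the 8-bit studio range [16, 235] described above.
int toCodeValue(double normalized, int lo, int hi) {
    return lo + static_cast<int>(std::lround(normalized * (hi - lo)));
}
```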
Defines the object types that are created by the source resolver.
-Media source. You can query the object for the
Byte stream. You can query the object for the
Invalid type.
Defines protection levels for MFPROTECTION_ACP.
-Specifies ACP is disabled.
Specifies ACP is level one.
Specifies ACP is level two.
Specifies ACP is level three.
Reserved.
Defines protection levels for MFPROTECTION_CGMSA.
-These flags are equivalent to the OPM_CGMSA_Protection_Level enumeration constants used in the Output Protection Protocol (OPM).
-CGMS-A is disabled.
The protection level is Copy Freely.
The protection level is Copy No More.
The protection level is Copy One Generation.
The protection level is Copy Never.
Redistribution control (also called the broadcast flag) is required. This flag can be combined with the other flags.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Defines event types for the
For each event type, the
In your implementation of OnMediaPlayerEvent, you must cast the pEventHeader parameter to the correct structure type. A set of macros is defined for this purpose. These macros check the value of the event type and return
Event type | Event structure | Pointer cast macro |
---|---|---|
MFP_GET_PLAY_EVENT | |
MFP_GET_PAUSE_EVENT | |
MFP_GET_STOP_EVENT | |
MFP_GET_POSITION_SET_EVENT | |
MFP_GET_RATE_SET_EVENT | |
MFP_GET_MEDIAITEM_CREATED_EVENT | |
MFP_GET_MEDIAITEM_SET_EVENT | |
MFP_GET_FRAME_STEP_EVENT | |
MFP_GET_MEDIAITEM_CLEARED_EVENT | |
MFP_GET_MF_EVENT | |
MFP_GET_ERROR_EVENT | |
MFP_GET_PLAYBACK_ENDED_EVENT | |
MFP_GET_ACQUIRE_USER_CREDENTIAL_EVENT |
-Defines policy settings for the
Specifies the object type for the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Specifies the current playback state.
- Contains flags that define the behavior of the
Defines actions that can be performed on a stream.
-No action.
Play the stream.
Copy the stream.
Export the stream to another format.
Extract the data from the stream and pass it to the application. For example, acoustic echo cancellation requires this action.
Reserved.
Reserved.
Reserved.
Last member of the enumeration.
Contains flags for the
If the decoder sets the
Specifies how aggressively a pipeline component should drop samples.
-In drop mode, a component drops samples, more or less aggressively depending on the level of the drop mode. The specific algorithm used depends on the component. Mode 1 is the least aggressive mode, and mode 5 is the most aggressive. A component is not required to implement all five levels.
For example, suppose an encoded video stream has three B-frames between each pair of P-frames. A decoder might implement the following drop modes:
Mode 1: Drop one out of every three B frames.
Mode 2: Drop one out of every two B frames.
Mode 3: Drop all delta frames.
Modes 4 and 5: Unsupported.
The enhanced video renderer (EVR) can drop video frames before sending them to the EVR mixer.
-Normal processing of samples. Drop mode is disabled.
First drop mode (least aggressive).
Second drop mode.
Third drop mode.
Fourth drop mode.
Fifth drop mode (most aggressive, if it is supported; see Remarks).
Maximum number of drop modes. This value is not a valid flag.
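The example drop modes for the hypothetical decoder above might be sketched as a simple decision function (purely illustrative; real components choose their own algorithms):

```cpp
#include <cassert>

// Illustrative drop policy for the example decoder described above:
// mode 1 drops one of every three B-frames, mode 2 one of every two,
// mode 3 and above drop all delta (non-key) frames. Modes 4 and 5 are
// unsupported in the example. bFrameIndex counts B-frames seen (0-based).
bool shouldDrop(int mode, bool isKeyFrame, bool isBFrame, int bFrameIndex) {
    if (mode <= 0 || isKeyFrame) return false;               // mode 0: drop nothing
    if (mode == 1) return isBFrame && bFrameIndex % 3 == 2;  // every third B-frame
    if (mode == 2) return isBFrame && bFrameIndex % 2 == 1;  // every second B-frame
    return true;                                             // mode >= 3: all delta frames
}
```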
Specifies the quality level for a pipeline component. The quality level determines how the component consumes or produces samples.
-Each successive quality level decreases the amount of processing that is needed, while also reducing the resulting quality of the audio or video. The specific algorithm used to reduce quality depends on the component. Mode 1 is the least aggressive mode, and mode 5 is the most aggressive. A component is not required to implement all five levels. Also, the same quality level might not be comparable between two different components.
Video decoders can often reduce quality by leaving out certain post-processing steps. The enhanced video renderer (EVR) can sometimes reduce quality by switching to a different deinterlacing mode.
-Normal quality.
One level below normal quality.
Two levels below normal quality.
Three levels below normal quality.
Four levels below normal quality.
Five levels below normal quality.
Maximum number of quality levels. This value is not a valid flag.
Specifies the direction of playback (forward or reverse).
-Forward playback.
Reverse playback.
Defines the version number for sample protection.
-No sample protection.
Version 1.
Version 2.
Version 3.
Specifies how a video stream is interlaced.
In the descriptions that follow, upper field refers to the field that contains the leading half scan line. Lower field refers to the field that contains the first full scan line.
-Scan lines in the lower field are 0.5 scan line lower than those in the upper field. In NTSC television, a frame consists of a lower field followed by an upper field. In PAL television, a frame consists of an upper field followed by a lower field.
The upper field is also called the even field, the top field, or field 2. The lower field is also called the odd field, the bottom field, or field 1.
If the interlace mode is
The type of interlacing is not known.
Progressive frames.
Specifies how to open or create a file.
-Open an existing file. Fail if the file does not exist.
Create a new file. Fail if the file already exists.
Open an existing file and truncate it, so that the size is zero bytes. Fail if the file does not already exist.
If the file does not exist, create a new file. If the file exists, open it.
Create a new file. If the file exists, overwrite the file.
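The five open modes differ only in how they react to an existing or a missing file. A sketch of that decision table (the enum names are invented, not the SDK identifiers):

```cpp
#include <cassert>

enum class OpenMode { FailIfNotExist, FailIfExist, ResetIfExist,
                      OpenOrCreate, DeleteIfExist }; // illustrative names

enum class Outcome { Fail, OpenExisting, OpenTruncated, CreateNew };

// Resolves what each open mode described above does, given whether
// the target file already exists.
Outcome resolve(OpenMode m, bool fileExists) {
    switch (m) {
        case OpenMode::FailIfNotExist: // open existing; fail if missing
            return fileExists ? Outcome::OpenExisting : Outcome::Fail;
        case OpenMode::FailIfExist:    // create new; fail if present
            return fileExists ? Outcome::Fail : Outcome::CreateNew;
        case OpenMode::ResetIfExist:   // truncate existing; fail if missing
            return fileExists ? Outcome::OpenTruncated : Outcome::Fail;
        case OpenMode::OpenOrCreate:   // open if present, else create
            return fileExists ? Outcome::OpenExisting : Outcome::CreateNew;
        case OpenMode::DeleteIfExist:  // always end up with a fresh file
            return Outcome::CreateNew;
    }
    return Outcome::Fail;
}
```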
Specifies whether a stream associated with an
Contains flags for adding a topology to the sequencer source, or updating a topology already in the queue.
-This topology is the last topology in the sequence.
Retrieves an interface from the enhanced video renderer (EVR), or from the video mixer or video presenter.
-This method can be called only from inside the
The presenter can use this method to query the EVR and the mixer. The mixer can use it to query the EVR and the presenter. Which objects are queried depends on the caller and the service
Caller | Service | Objects queried |
---|---|---|
Presenter | MR_VIDEO_RENDER_SERVICE | EVR |
Presenter | MR_VIDEO_MIXER_SERVICE | Mixer |
Mixer | MR_VIDEO_RENDER_SERVICE | Presenter and EVR |
The following interfaces are available from the EVR:
IMediaEventSink. This interface is documented in the DirectShow SDK documentation.
The following interfaces are available from the mixer:
Specifies the scope of the search. Currently this parameter is ignored. Use the value
Reserved, must be zero.
Service
Interface identifier of the requested interface.
Array of interface references. If the method succeeds, each member of the array contains either a valid interface reference or
Pointer to a value that specifies the size of the ppvObjects array. The value must be at least 1. In the current implementation, there is no reason to specify an array size larger than one element. The value is not changed on output.
Defines flags for the
Defines the behavior of the
These flags are optional, and are not mutually exclusive. If no flags are set, the Media Session resolves the topology and then adds it to the queue of pending presentations.
- Describes the current status of a call to the
Specifies how the ASF file sink should apply Windows Media DRM.
-Undefined action.
Encode the content using Windows Media DRM. Use this flag if the source content does not have DRM protection.
Transcode the content using Windows Media DRM. Use this flag if the source content has Windows Media DRM protection and you want to change the encoding parameters but not the DRM protection.
Transcrypt the content. Use this flag if the source content has DRM protection and you want to change the DRM protection; for example, if you want to convert from Windows Media DRM version 1 to Windows Media DRM version 7 or later.
Reserved. Do not use.
Contains flags for the
Contains flags that indicate the status of the
Contains values that specify common video formats.
-Reserved; do not use.
NTSC television (720 x 480i).
PAL television (720 x 576i).
DVD, NTSC standard (720 x 480).
DVD, PAL standard (720 x 576).
DV video, PAL standard.
DV video, NTSC standard.
ATSC digital television, SD (480i).
ATSC digital television, HD interlaced (1080i)
ATSC digital television, HD progressive (720p)
Defines stream marker information for the
If the Streaming Audio Renderer receives an
Specifies how text is aligned in its parent block element.
-Text is aligned at the start of its parent block element.
Text is aligned at the end of its parent block element.
Text is aligned in the center of its parent block element.
Specifies the type of a timed text cue event.
-The cue has become active.
The cue has become inactive.
All cues have been deactivated.
Specifies how text is decorated (underlined and so on).
-Text isn't decorated.
Text is underlined.
Text has a line through it.
Text has a line over it.
Specifies how text is aligned with the display.
-Text is aligned before an element.
Text is aligned after an element.
Text is aligned in the center between elements.
Specifies the kind of error that occurred with a timed text track.
-This enumeration is used to return error information from the
No error occurred.
A fatal error occurred.
An error occurred with the data format of the timed text track.
A network error occurred when trying to load the timed text track.
An internal error occurred.
Specifies the font style of the timed text.
-The font style is normal, sometimes referred to as roman.
The font style is oblique.
The font style is italic.
Specifies how text appears when the parent element is scrolled.
-Text pops on when the parent element is scrolled.
Text rolls up when the parent element is scrolled.
Specifies the kind of timed text track.
-The kind of timed text track is unknown.
The kind of timed text track is subtitles.
The kind of timed text track is closed captions.
The kind of timed text track is metadata.
Specifies the units in which the timed text is measured.
-The timed text is measured in pixels.
The timed text is measured as a percentage.
Specifies the sequence in which text is written on its parent element.
-Text is written from left to right and top to bottom.
Text is written from right to left and top to bottom.
Text is written from top to bottom and right to left.
Text is written from top to bottom and left to right.
Text is written from left to right.
Text is written from right to left.
Text is written from top to bottom.
Contains flags for the
Defines messages for a Media Foundation transform (MFT). To send a message to an MFT, call
Some messages require specific actions from the MFT. These messages have "MESSAGE" in the message name. Other messages are informational; they notify the MFT of some action by the client, and do not require any particular response from the MFT. These messages have "NOTIFY" in the message name. Except where noted, an MFT should not rely on the client sending notification messages.
-Specifies whether the topology loader enables Microsoft DirectX Video Acceleration (DXVA) in the topology.
-This enumeration is used with the
If an MFT supports DXVA, the MFT must return TRUE for the
Previous versions of Microsoft Media Foundation supported DXVA only for decoders.
-The topology loader enables DXVA - on the decoder if possible, and drops optional Media Foundation transforms (MFTs) that do not support DXVA.
The topology loader disables all video acceleration. This setting forces software processing, even when the decoder supports DXVA.
The topology loader enables DXVA on every MFT that supports it.
Specifies whether the topology loader will insert hardware-based Media Foundation transforms (MFTs) into the topology.
- This enumeration is used with the
Use only software MFTs. Do not use hardware-based MFTs. This mode is the default, for backward compatibility with existing applications.
Use hardware-based MFTs when possible, and software MFTs otherwise. This mode is the recommended one.
If hardware-based MFTs are available, the topology loader will insert them. If not, the connection will fail.
Supported in Windows 8.1 and later.
Defines status flags for the
Specifies the status of a topology during playback.
- This enumeration is used with the
For a single topology, the Media Session sends these status flags in numerical order, starting with
This value is not used.
The topology is ready to start. After this status flag is received, you can use the Media Session's
The Media Session has started to read data from the media sources in the topology.
The Media Session modified the topology, because the format of a stream changed.
The media sinks have switched from the previous topology to this topology. This status value is not sent for the first topology that is played. For the first topology, the
Playback of this topology is complete. The Media Session might still use the topology internally. The Media Session does not completely release the topology until it sends the next
Defines the type of a topology node.
-Output node. Represents a media sink in the topology.
Source node. Represents a media stream in the topology.
Transform node. Represents a Media Foundation Transform (MFT) in the topology.
Tee node. A tee node does not hold a reference to an object. Instead, it represents a fork in the stream. A tee node has one input and multiple outputs, and samples from the upstream node are delivered to all of the downstream nodes.
Reserved.
Defines at what times a transform in a topology is drained.
-The transform is drained when the end of a stream is reached. It is not drained when markout is reached at the end of a segment.
The transform is drained whenever a topology ends.
The transform is never drained.
Defines when a transform in a topology is flushed.
-The transform is flushed whenever the stream changes, including seeks and new segments.
The transform is flushed when seeking is performed on the stream.
The transform is never flushed during streaming. It is flushed only when the object is released.
Defines the profile flags that are set in the
These flags are checked by
For more information about the stream settings that an application can specify, see Using the Transcode API.
-If the
The
For the video stream, the required attributes are as follows:
If these attributes are not set,
Use the
For example, assume that your input source is an MP3 file. You set the container to be
Defines flags for the
Contains flags for registering and enumerating Media Foundation transforms (MFTs).
These flags are used in the following functions:
For registration, these flags describe the MFT that is being registered. Some flags do not apply in that context. For enumeration, these flags control which MFTs are selected in the enumeration. For more details about the precise meaning of these flags, see the reference topics for
For registration, the
Defines flags for processing output samples in a Media Foundation transform (MFT).
-Do not produce output for streams in which the pSample member of the
Regenerates the last output sample.
Note: Requires Windows 8.
Indicates the status of a call to
If the MFT sets this flag, the ProcessOutput method returns
Call
Call
Call
Until these steps are completed, all further calls to ProcessOutput return
Indicates whether the URL is from a trusted source.
-The validity of the URL cannot be guaranteed because it is not signed. The application should warn the user.
The URL is the original one provided with the content.
The URL was originally signed and has been tampered with. The file should be considered corrupted, and the application should not navigate to the URL without issuing a strong warning to the user.
Specifies how 3D video frames are stored in memory.
-This enumeration is used with the
The base view is stored in a single buffer. The other view is discarded.
Each media sample contains multiple buffers, one for each view.
Each media sample contains one buffer, with both views packed side-by-side into a single frame.
Each media sample contains one buffer, with both views packed top-and-bottom into a single frame.
Specifies how to output a 3D stereoscopic video stream.
-This enumeration is used with the
Output the base view only. Discard the other view.
Output a stereo view (two buffers).
Specifies how a 3D video frame is stored in a media sample.
-This enumeration is used with the
The exact layout of the views in memory is specified by the following media type attributes:
Each view is stored in a separate buffer. The sample contains one buffer per view.
All of the views are stored in the same buffer. The sample contains a single buffer.
Specifies the aspect-ratio mode.
-Do not maintain the aspect ratio of the video. Stretch the video to fit the output rectangle.
Preserve the aspect ratio of the video by letterboxing or pillarboxing it within the output rectangle.
Correct the aspect ratio if the physical size of the display device does not match the display resolution. For example, if the native resolution of the monitor is 1600 by 1200 (4:3) but the display resolution is 1280 by 1024 (5:4), the monitor will display non-square pixels.
If this flag is set, you must also set the
Apply a non-linear horizontal stretch if the aspect ratio of the destination rectangle does not match the aspect ratio of the source rectangle.
The non-linear stretch algorithm preserves the aspect ratio in the middle of the picture and stretches (or shrinks) the image progressively more toward the left and right. This mode is useful when viewing 4:3 content full-screen on a 16:9 display, instead of pillar-boxing. Non-linear vertical stretch is not supported, because the visual results are generally poor.
This mode may cause performance degradation.
If this flag is set, you must also set the
Contains flags that define the chroma encoding scheme for Y'Cb'Cr' data.
-These flags are used with the
For more information about these values, see the remarks for the DXVA2_VideoChromaSubSampling enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
-Unknown encoding scheme.
Chroma should be reconstructed as if the underlying video was progressive content, rather than skipping fields or applying chroma filtering to minimize artifacts from reconstructing 4:2:0 interlaced chroma.
Chroma samples are aligned horizontally with the luma samples, or with multiples of the luma samples. If this flag is not set, chroma samples are located 1/2 pixel to the right of the corresponding luma sample.
Chroma samples are aligned vertically with the luma samples, or with multiples of the luma samples. If this flag is not set, chroma samples are located 1/2 pixel down from the corresponding luma sample.
The U and V planes are aligned vertically. If this flag is not set, the chroma planes are assumed to be out of phase by 1/2 chroma sample, alternating between a line of U followed by a line of V.
Specifies the chroma encoding scheme for MPEG-2 video. Chroma samples are aligned horizontally with the luma samples, but are not aligned vertically. The U and V planes are aligned vertically.
Specifies the chroma encoding scheme for MPEG-1 video.
Specifies the chroma encoding scheme for PAL DV video.
Chroma samples are aligned vertically and horizontally with the luma samples. YUV formats such as 4:4:4, 4:2:2, and 4:1:1 are always cosited in both directions and should use this flag.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Specifies the type of copy protection required for a video stream.
-Use these flags with the
No copy protection is required.
Analog copy protection should be applied.
Digital copy protection should be applied.
Contains flags that describe a video stream.
These flags are used in the
Developers are encouraged to use media type attributes instead of using the
Flags | Media Type Attribute |
---|---|
| |
| |
| |
| |
Use the |
The following flags were defined to describe per-sample interlacing information, but are obsolete:
Instead, components should use sample attributes to describe per-sample interlacing information, as described in the topic Video Interlacing.
-Specifies how a video stream is interlaced.
In the descriptions that follow, upper field refers to the field that contains the leading half scan line. Lower field refers to the field that contains the first full scan line.
-Scan lines in the lower field are 0.5 scan line lower than those in the upper field. In NTSC television, a frame consists of a lower field followed by an upper field. In PAL television, a frame consists of an upper field followed by a lower field.
The upper field is also called the even field, the top field, or field 2. The lower field is also called the odd field, the bottom field, or field 1.
If the interlace mode is
The type of interlacing is not known.
Progressive frames.
Interlaced frames. Each frame contains two fields. The field lines are interleaved, with the upper field appearing on the first line.
Interlaced frames. Each frame contains two fields. The field lines are interleaved, with the lower field appearing on the first line.
Interlaced frames. Each frame contains one field, with the upper field appearing first.
Interlaced frames. Each frame contains one field, with the lower field appearing first.
The stream contains a mix of interlaced and progressive modes.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Describes the optimal lighting for viewing a particular set of video content.
-This enumeration is used with the
The optimal lighting is unknown.
Bright lighting; for example, outdoors.
Medium brightness; for example, normal office lighting.
Dim; for example, a living room with a television and additional low lighting.
Dark; for example, a movie theater.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Contains flags that are used to configure how the enhanced video renderer (EVR) performs deinterlacing.
-To set these flags, call the
These flags control some trade-offs between video quality and rendering speed. The constants named "MFVideoMixPrefs_Allow..." enable lower-quality settings, but only when the quality manager requests a drop in quality. The constants named "MFVideoMixPrefs_Force..." force the EVR to use lower-quality settings regardless of what the quality manager requests. (For more information about the quality manager, see
Currently two lower-quality modes are supported, as described in the following table. Either is preferable to dropping an entire frame.
Mode | Description |
---|---|
Half interlace | The EVR's video mixer skips the second field (relative to temporal order) of each interlaced frame. The video mixer still deinterlaces the first field, and this operation typically interpolates data from the second field. The overall frame rate is unaffected. |
Bob deinterlacing | The video mixer uses bob deinterlacing, even if the driver supports a higher-quality deinterlacing algorithm. |
-Force the EVR to skip the second field (in temporal order) of every interlaced frame.
If the EVR is falling behind, allow it to skip the second field (in temporal order) of every interlaced frame.
If the EVR is falling behind, allow it to use bob deinterlacing, even if the driver supports a higher-quality deinterlacing mode.
Force the EVR to use bob deinterlacing, even if the driver supports a higher-quality mode.
The bitmask of valid flag values. This constant is not itself a valid flag.
Specifies whether to pad a video image so that it fits within a specified aspect ratio.
-Use these flags with the
Do not pad the image.
Pad the image so that it can be displayed in a 4:3 area.
Pad the image so that it can be displayed in a 16:9 area.
Specifies the color primaries of a video source. The color primaries define how to convert colors from RGB color space to CIE XYZ color space.
-This enumeration is used with the
For more information about these values, see the remarks for the DXVA2_VideoPrimaries enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
-The color primaries are unknown.
Reserved.
ITU-R BT.709. Also used for sRGB and scRGB.
ITU-R BT.470-4 System M (NTSC).
ITU-R BT.470-4 System B,G (NTSC).
SMPTE 170M.
SMPTE 240M.
EBU 3213.
SMPTE C (SMPTE RP 145).
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Defines algorithms for the video processor, which are used by the MF_VIDEO_PROCESSOR_ALGORITHM attribute.
-Specifies how to flip a video image.
-Do not flip the image.
Flip the image horizontally.
Flip the image vertically.
Specifies how to rotate a video image.
-Do not rotate the image.
Rotate the image to the correct viewing orientation.
Contains flags that define how the enhanced video renderer (EVR) displays the video.
-To set these flags, call
The flags named "MFVideoRenderPrefs_Allow..." cause the EVR to use lower-quality settings only when requested by the quality manager. (For more information, see
If this flag is set, the EVR does not draw the border color. By default, the EVR draws a border on areas of the destination rectangle that have no video. See
If this flag is set, the EVR does not clip the video when the video window straddles two monitors. By default, if the video window straddles two monitors, the EVR clips the video to the monitor that contains the largest area of video.
Note: Requires Windows 7 or later.
Allow the EVR to limit its output to match GPU bandwidth.
Note: Requires Windows 7 or later.
Force the EVR to limit its output to match GPU bandwidth.
Note: Requires Windows 7 or later.
Force the EVR to batch Direct3D Present calls. This optimization enables the system to enter idle states more frequently, which can reduce power consumption.
Note: Requires Windows 7 or later.
Allow the EVR to batch Direct3D Present calls.
Note: Requires Windows 7 or later.
Force the EVR to mix the video inside a rectangle that is smaller than the output rectangle. The EVR will then scale the result to the correct output size. The effective resolution will be lower if this setting is applied.
Note: Requires Windows 7 or later.
Allow the EVR to mix the video inside a rectangle that is smaller than the output rectangle.
Note: Requires Windows 7 or later.
Prevent the EVR from repainting the video window after a stop command. By default, the EVR repaints the video window black after a stop command.
Describes the rotation of the video image in the counter-clockwise direction.
-This enumeration is used with the
The image is not rotated.
The image is rotated 90 degrees counter-clockwise.
The image is rotated 180 degrees.
The image is rotated 270 degrees counter-clockwise.
Describes the intended aspect ratio for a video stream.
-Use these flags with the
The aspect ratio is unknown.
The source is 16:9 content encoded within a 4:3 area.
The source is 2.35:1 content encoded within a 16:9 or 4:3 area.
Specifies the conversion function from linear RGB to non-linear RGB (R'G'B').
- These flags are used with the
For more information about these values, see the remarks for the DXVA2_VideoTransferFunction enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
- Unknown. Treat as
Linear RGB (gamma = 1.0).
True 1.8 gamma, L' = L^1/1.8.
True 2.0 gamma, L' = L^1/2.0.
True 2.2 gamma, L' = L^1/2.2. This transfer function is used in ITU-R BT.470-2 System M (NTSC).
ITU-R BT.709 transfer function. Gamma 2.2 curve with a linear segment in the lower range. This transfer function is used in BT.709, BT.601, SMPTE 296M, SMPTE 170M, BT.470, and SMPTE 274M. In addition, BT.1361 uses this function within the range [0...1].
SMPTE 240M transfer function. Gamma 2.2 curve with a linear segment in the lower range.
sRGB transfer function. Gamma 2.4 curve with a linear segment in the lower range.
True 2.8 gamma. L' = L^1/2.8. This transfer function is used in ITU-R BT.470-2 System B, G (PAL).
Logarithmic transfer (100:1 range); for example, as used in H.264 video.
Note: Requires Windows 7 or later.
Logarithmic transfer (316.22777:1 range); for example, as used in H.264 video.
Note: Requires Windows 7 or later.
Symmetric ITU-R BT.709.
Note: Requires Windows 7 or later.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Describes the conversion matrices between Y'PbPr (component video) and studio R'G'B'.
-This enumeration is used with the
For more information about these values, see the remarks for the DXVA2_VideoTransferMatrix enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
-Unknown transfer matrix. Treat as
ITU-R BT.709 transfer matrix.
ITU-R BT.601 transfer matrix. Also used for SMPTE 170 and ITU-R BT.470-2 System B,G.
SMPTE 240M transfer matrix.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Defines messages for an enhanced video renderer (EVR) presenter. This enumeration is used with the
Contains flags that specify how to convert an audio media type.
-Convert the media type to a
Convert the media type to a
Provides configuration information to the dispatching thread for a callback.
-The GetParameters method returns information about the callback so that the dispatching thread can optimize the process that it uses to invoke the callback.
If the method returns a value other than zero in the pdwFlags parameter, your Invoke method must meet the requirements described here. Otherwise, the callback might delay the pipeline.
If you want default values for both parameters, return E_NOTIMPL. The default values are given in the parameter descriptions on this page.
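A minimal sketch of such a callback (Windows-only; the class name `WorkItemCallback` and the body of `Invoke` are illustrative, and the IUnknown plumbing is omitted for brevity):

```cpp
#include <mfidl.h>  // IMFAsyncCallback, IMFAsyncResult

class WorkItemCallback : public IMFAsyncCallback
{
public:
    // Returning E_NOTIMPL accepts the default flags and work queue.
    STDMETHODIMP GetParameters(DWORD* pdwFlags, DWORD* pdwQueue) override
    {
        return E_NOTIMPL;
    }

    // Invoked by the dispatching thread when the operation completes.
    STDMETHODIMP Invoke(IMFAsyncResult* pResult) override
    {
        HRESULT hrStatus = pResult->GetStatus();
        // ... complete the asynchronous operation here ...
        return hrStatus;
    }

    // IUnknown (QueryInterface/AddRef/Release) omitted for brevity.
};
```

If the callback did return flags and a queue identifier from GetParameters, the dispatching thread would use them to pick a work queue compatible with the stated behavior.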
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Receives a flag indicating the behavior of the callback object's
Value | Meaning |
---|---|
| The callback does not take a long time to complete, but has no specific restrictions on what system calls it makes. The callback generally takes less than 30 milliseconds to complete. |
The callback does very minimal processing. It takes less than 1 millisecond to complete. The callback must be invoked from one of the following work queues: | |
Implies The callback must be invoked from one of the following work queues: | |
Blocking callback. | |
Reply callback. |
Receives the identifier of the work queue on which the callback is dispatched.
This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
If the work queue is not compatible with the value returned in pdwFlags, the Media Foundation platform returns
Creates the default video presenter for the enhanced video renderer (EVR).
-Pointer to the owner of the object. If the object is aggregated, pass a reference to the aggregating object's
Interface identifier (IID) of the video device interface that will be used for processing the video. Currently the only supported value is IID_IDirect3DDevice9.
IID of the requested interface on the video presenter. The video presenter exposes the
Receives a reference to the requested interface on the video presenter. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates the default video mixer for the enhanced video renderer (EVR).
-Pointer to the owner of this object. If the object is aggregated, pass a reference to the aggregating object's
Interface identifier (IID) of the video device interface that will be used for processing the video. Currently the only supported value is IID_IDirect3DDevice9.
IID of the requested interface on the video mixer. The video mixer exposes the
Receives a reference to the requested interface. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the default video mixer and video presenter for the enhanced video renderer (EVR).
-Pointer to the owner of the video mixer. If the mixer is aggregated, pass a reference to the aggregating object's
Pointer to the owner of the video presenter. If the presenter is aggregated, pass a reference to the aggregating object's
Interface identifier (IID) of the requested interface on the video mixer. The video mixer exposes the
Receives a reference to the requested interface on the video mixer. The caller must release the interface.
IID of the requested interface on the video presenter. The video presenter exposes the
Receives a reference to the requested interface on the video presenter. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates an instance of the enhanced video renderer (EVR) media sink.
-Interface identifier (IID) of the requested interface on the EVR.
Receives a reference to the requested interface. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
This function creates the Media Foundation version of the EVR. To create the DirectShow EVR filter, call CoCreateInstance with the class identifier CLSID_EnhancedVideoRenderer.
-Creates a media sample that manages a Direct3D surface.
- A reference to the
Receives a reference to the sample's
If this function succeeds, it returns
The media sample created by this function exposes the following interfaces in addition to
If pUnkSurface is non-
Alternatively, you can set pUnkSurface to
Creates an object that allocates video samples.
-The identifier of the interface to retrieve. Specify one of the following values:
Value | Meaning |
---|---|
| Retrieve an |
| Retrieve an |
| Retrieve an |
Receives a reference to the requested interface. The caller must release the interface.
If the function succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Creates a new instance of the MFPlay player object.
-If this function succeeds, it returns
Before calling this function, call CoInitialize(Ex) from the same thread to initialize the COM library.
Internally,
Creates the ASF Header Object object.
-The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF profile object.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an ASF profile object from a presentation descriptor.
-Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a presentation descriptor from an ASF profile object.
-Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF Splitter.
-The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF Multiplexer.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF Indexer object.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a byte stream to access the index in an ASF stream.
-Pointer to the
Byte offset of the index within the ASF stream. To get this value, call
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The call succeeded. |
| The offset specified in cbIndexStartOffset is invalid. |
Creates the ASF stream selector.
-Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF media sink.
-Pointer to a byte stream that will be used to write the ASF stream.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an activation object that can be used to create the ASF media sink.
-Null-terminated wide-character string that contains the output file name.
A reference to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an activation object that can be used to create a Windows Media Video (WMV) encoder.
-A reference to the
A reference to the
Receives a reference to the
If this function succeeds, it returns
Creates an activation object that can be used to create a Windows Media Audio (WMA) encoder.
- A reference to the
A reference to the
Receives a reference to the
If this function succeeds, it returns
Creates an activation object for the ASF streaming sink.
The ASF streaming sink enables an application to write streaming Advanced Systems Format (ASF) packets to an HTTP byte stream.
-A reference to a byte stream object in which the ASF media sink writes the streamed content.
Receives a reference to the
If this function succeeds, it returns
To create the ASF streaming sink in another process, call
An application can get a reference to the ASF ContentInfo Object by calling IUnknown::QueryInterface on the media sink object received in the ppIMediaSink parameter. The ContentInfo object is used to set the encoder configuration settings, provide stream properties supplied by an ASF profile, and add metadata information. These configuration settings populate the various ASF header objects of the encoded ASF file. For more information, see Setting Properties in the ContentInfo Object.
-Creates an activation object for the ASF streaming sink.
The ASF streaming sink enables an application to write streaming Advanced Systems Format (ASF) packets to an HTTP byte stream. The activation object can be used to create the ASF streaming sink in another process.
-A reference to the
A reference to an ASF ContentInfo Object that contains the properties that describe the ASF content. These settings can contain stream settings, encoding properties, and metadata. For more information about these properties, see Setting Properties in the ContentInfo Object.
Receives a reference to the
If this function succeeds, it returns
Starting in Windows 7, Media Foundation provides an ASF streaming sink that writes the content in a live streaming scenario. This function should be used in secure transcode scenarios where this media sink needs to be created and configured in the remote process. Like the ASF file sink, the new media sink performs ASF-related tasks such as writing the ASF header and generating data packets (muxing). The content is written to a caller-implemented byte stream, such as an HTTP byte stream. The caller must also provide an activation object that the media sink can use to create the byte stream remotely.
In addition, it performs transcryption for streaming protected content. It hosts the Windows Media Digital Rights Management (DRM) for Network Devices Output Trust Authority (OTA) that handles the license request and response. For more information, see
The new media sink does not perform any time adjustments. If the clock seeks, the timestamps are not changed.
-Initializes Microsoft Media Foundation.
-Version number. Use the value
This parameter is optional when using C++ but required in C. The value must be one of the following flags:
Value | Meaning |
---|---|
| Do not initialize the sockets library. |
| Equivalent to MFSTARTUP_NOSOCKET. |
| Initialize the entire Media Foundation platform. This is the default value when dwFlags is not specified. |
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Version parameter requires a newer version of Media Foundation than the version that is running. |
| The Media Foundation platform is disabled because the system was started in "Safe Mode" (fail-safe boot). |
| Media Foundation is not implemented on the system. This error can occur if the media components are not present (See KB2703761 for more info). |
An application must call this function before using Media Foundation. Before your application quits, call
Do not call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
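The startup/shutdown pairing described above can be sketched as follows (Windows-only; assumes the Media Foundation headers and mfplat.lib are available, and the function name is illustrative):

```cpp
#include <mfapi.h>
#pragma comment(lib, "mfplat.lib")

int RunMediaFoundationApp()
{
    // MF_VERSION is defined by the SDK headers in use.
    // MFSTARTUP_FULL initializes the entire platform (the default in C++).
    HRESULT hr = MFStartup(MF_VERSION, MFSTARTUP_FULL);
    if (FAILED(hr))
        return -1;  // e.g. MF_E_BAD_STARTUP_VERSION or MF_E_DISABLED_IN_SAFEMODE

    // ... use Media Foundation here ...

    // Every successful MFStartup call must be balanced by one MFShutdown call.
    MFShutdown();
    return 0;
}
```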
Shuts down the Microsoft Media Foundation platform. Call this function once for every call to
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Blocks the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function prevents work queue threads from being shut down when
This function holds a lock on the Media Foundation platform. To unlock the platform, call
The
The default implementation of the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Unlocks the Media Foundation platform after it was locked by a call to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
The application must call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Puts an asynchronous operation on a work queue.
- The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
A reference to the
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue. For more information, see |
| The |
This function creates an asynchronous result object and puts the result object on the work queue. The work queue calls the
Puts an asynchronous operation on a work queue, with a specified priority.
- The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
The priority of the work item. Work items are performed in order of priority.
A reference to the
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue identifier. |
| The |
Puts an asynchronous operation on a work queue.
-The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue identifier. For more information, see |
| The |
To invoke the work-item, this function passes pResult to the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Puts an asynchronous operation on a work queue, with a specified priority.
- The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
The priority of the work item. Work items are performed in order of priority.
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue identifier. |
| The |
To invoke the work item, this function passes pResult to the
Queues a work item that waits for an event to be signaled.
-A handle to an event object. To create an event object, call CreateEvent or CreateEventEx.
The priority of the work item. Work items are performed in order of priority.
A reference to the
Receives a key that can be used to cancel the wait. To cancel the wait, call
If this function succeeds, it returns
This function enables a component to wait for an event without blocking the current thread.
The function puts a work item on the specified work queue. This work item waits for the event given in hEvent to be signaled. When the event is signaled, the work item invokes a callback. (The callback is contained in the result object given in pResult. For more information, see
The work item is dispatched on a work queue by the
Do not use any of the following work queues:
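The wait-without-blocking pattern described above can be sketched as follows (Windows-only; the function name and the null object pointer passed to MFCreateAsyncResult are illustrative):

```cpp
#include <mfapi.h>
#include <mfidl.h>

HRESULT WaitWithoutBlocking(IMFAsyncCallback* pCallback, IUnknown* pState)
{
    // Auto-reset event, initially unsignaled.
    HANDLE hEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    if (!hEvent)
        return HRESULT_FROM_WIN32(GetLastError());

    IMFAsyncResult* pResult = nullptr;
    HRESULT hr = MFCreateAsyncResult(nullptr, pCallback, pState, &pResult);
    if (SUCCEEDED(hr))
    {
        MFWORKITEM_KEY cancelKey = 0;
        // The callback's Invoke method runs once hEvent is signaled;
        // cancelKey can be used to cancel the wait.
        hr = MFPutWaitingWorkItem(hEvent, 0 /* priority */, pResult, &cancelKey);
        pResult->Release();
    }
    return hr;
}
```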
Creates a work queue that is guaranteed to serialize work items. The serial work queue wraps an existing multithreaded work queue. The serial work queue enforces a first-in, first-out (FIFO) execution order.
-The identifier of an existing work queue. This must be either a multithreaded queue or another serial work queue. Any of the following can be used:
Receives an identifier for the new serial work queue. Use this identifier when queuing work items.
This function can return one of these values.
Return code | Description |
---|---|
| The function succeeded. |
| The application exceeded the maximum number of work queues. |
| The application did not call |
When you are done using the work queue, call
Multithreaded queues use a thread pool, which can reduce the total number of threads in the pipeline. However, they do not serialize work items. A serial work queue enables the application to get the benefits of the thread pool, without needing to perform manual serialization of its own work items.
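A usage sketch of the serial work queue (Windows-only; the function name and the choice of the standard multithreaded queue as the underlying queue are illustrative):

```cpp
#include <mfapi.h>

HRESULT QueueSerialized(IMFAsyncCallback* pCallback, IUnknown* pState)
{
    DWORD serialQueue = 0;
    // Items put on serialQueue run one at a time, in FIFO order,
    // but still execute on the shared thread pool.
    HRESULT hr = MFAllocateSerialWorkQueue(MFASYNC_CALLBACK_QUEUE_MULTITHREADED,
                                           &serialQueue);
    if (FAILED(hr))
        return hr;

    hr = MFPutWorkItem2(serialQueue, 0 /* priority */, pCallback, pState);

    // Release the queue identifier when it is no longer needed.
    MFUnlockWorkQueue(serialQueue);
    return hr;
}
```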
Schedules an asynchronous operation to be completed after a specified interval.
-Pointer to the
Time-out interval, in milliseconds. Set this parameter to a negative value. The callback is invoked after -Timeout milliseconds. For example, if Timeout is -5000, the callback is invoked after 5000 milliseconds.
Receives a key that can be used to cancel the timer. To cancel the timer, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
When the timer interval elapses, the timer calls
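The negative-timeout convention can be sketched as follows (Windows-only; the function name is illustrative):

```cpp
#include <mfapi.h>

HRESULT ScheduleIn5Seconds(IMFAsyncCallback* pCallback, IUnknown* pState)
{
    MFWORKITEM_KEY cancelKey = 0;
    // A Timeout of -5000 means "invoke the callback after 5000 milliseconds".
    HRESULT hr = MFScheduleWorkItem(pCallback, pState, -5000, &cancelKey);
    if (FAILED(hr))
        return hr;

    // To cancel before the timer fires: MFCancelWorkItem(cancelKey);
    return S_OK;
}
```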
Schedules an asynchronous operation to be completed after a specified interval.
-Pointer to the
Pointer to the
Time-out interval, in milliseconds. Set this parameter to a negative value. The callback is invoked after -Timeout milliseconds. For example, if Timeout is -5000, the callback is invoked after 5000 milliseconds.
Receives a key that can be used to cancel the timer. To cancel the timer, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function creates an asynchronous result object. When the timer interval elapses, the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Attempts to cancel an asynchronous operation that was scheduled with
If this function succeeds, it returns
Because work items are asynchronous, the work-item callback might still be invoked after
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the timer interval for the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Sets a callback function to be called at a fixed interval.
-Pointer to the callback function, of type MFPERIODICCALLBACK.
Pointer to a caller-provided object that implements
Receives a key that can be used to cancel the callback. To cancel the callback, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To get the timer interval for the periodic callback, call
Cancels a callback function that was set by the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
The callback is dispatched on another thread, and this function does not attempt to synchronize with the callback thread. Therefore, it is possible for the callback to be invoked after this function returns.
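Because cancellation does not synchronize with the callback thread, callers typically pair the cancel call with a flag of their own. A minimal Python sketch of that pattern (a one-shot timer stands in for the periodic callback; every name here is illustrative, not the real API):

```python
import threading
import time

class TimerCallback:
    """One-shot timer whose cancel() is best effort, like the native API."""
    def __init__(self, interval_s, callback):
        self._cb = callback
        self._cancelled = threading.Event()
        self._timer = threading.Timer(interval_s, self._fire)
        self._timer.start()

    def _fire(self):
        # A callback that has already passed this check can still run
        # after cancel() returns; cancel() does not join the timer thread.
        if not self._cancelled.is_set():
            self._cb()

    def cancel(self):
        self._cancelled.set()   # application-level flag
        self._timer.cancel()    # best effort, no synchronization
```

The application-level flag is what makes a late callback harmless: the callback body checks it before doing any work.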
-Creates a new work queue. This function extends the capabilities of the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The application exceeded the maximum number of work queues. |
| Invalid argument. |
| The application did not call |
When you are done using the work queue, call
The
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates a new work queue.
-Receives an identifier for the work queue.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The application exceeded the maximum number of work queues. |
| The application did not call |
When you are done using the work queue, call
Locks a work queue.
-The identifier for the work queue. The identifier is returned by the
If this function succeeds, it returns
This function prevents the
Call
Note: The
Unlocks a work queue.
-Identifier for the work queue to be unlocked. The identifier is returned by the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
The application must call
Associates a work queue with a Multimedia Class Scheduler Service (MMCSS) task.
-The identifier of the work queue. For private work queues, the identifier is returned by the
The name of the MMCSS task. For more information, see Multimedia Class Scheduler Service.
The unique task identifier. To obtain a new task identifier, set this value to zero.
A reference to the
A reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function is asynchronous. When the operation completes, the callback object's
To unregister the work queue from the MMCSS task, call
Associates a work queue with a Multimedia Class Scheduler Service (MMCSS) task.
-The identifier of the work queue. For private work queues, the identifier is returned by the
The name of the MMCSS task. For more information, see Multimedia Class Scheduler Service.
The unique task identifier. To obtain a new task identifier, set this value to zero.
The base relative priority for the work-queue threads. For more information, see AvSetMmThreadPriority.
A reference to the
A reference to the
If this function succeeds, it returns
This function extends the
This function is asynchronous. When the operation completes, the callback object's
To unregister the work queue from the MMCSS task, call
Completes an asynchronous request to associate a work queue with a Multimedia Class Scheduler Service (MMCSS) task.
-Pointer to the
The unique task identifier.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Call this function when the
To unregister the work queue from the MMCSS class, call
Unregisters a work queue from a Multimedia Class Scheduler Service (MMCSS) task.
-The identifier of the work queue. For private work queues, the identifier is returned by the
Pointer to the
Pointer to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function unregisters a work queue that was associated with an MMCSS class through the
This function is asynchronous. When the operation completes, the callback object's
Completes an asynchronous request to unregister a work queue from a Multimedia Class Scheduler Service (MMCSS) task.
-Pointer to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Call this function when the
Retrieves the Multimedia Class Scheduler Service (MMCSS) class currently associated with this work queue.
-Identifier for the work queue. The identifier is retrieved by the
Pointer to a buffer that receives the name of the MMCSS class. This parameter can be
On input, specifies the size of the pwszClass buffer, in characters. On output, receives the required size of the buffer, in characters. The size includes the terminating null character.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The pwszClass buffer is too small to receive the task name. |
If the work queue is not associated with an MMCSS task, the function retrieves an empty string.
To associate a work queue with an MMCSS task, call
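The in/out size parameter above follows the common two-call convention: query the required size first, then call again with a large-enough buffer. A Python sketch of that convention (the function name and numeric return codes are illustrative only):

```python
S_OK = 0
E_NOT_SUFFICIENT_BUFFER = 1

def get_mmcss_class(task_name, buf, cch):
    """cch[0] is the buffer size in characters on input and the
    required size (including the terminating null) on output."""
    needed = len(task_name) + 1
    if buf is None or cch[0] < needed:
        cch[0] = needed                 # report the required size
        return E_NOT_SUFFICIENT_BUFFER
    buf[:needed] = task_name + "\0"     # copy the name plus the null
    cch[0] = needed
    return S_OK
```

A caller first passes a null buffer to learn the required size, allocates, and calls again.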
Retrieves the Multimedia Class Scheduler Service (MMCSS) task identifier currently associated with this work queue.
-Identifier for the work queue. The identifier is retrieved by the
Receives the task identifier.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To associate a work queue with an MMCSS task, call
Registers the standard Microsoft Media Foundation platform work queues with the Multimedia Class Scheduler Service (MMCSS). -
-The name of the MMCSS task.
The MMCSS task identifier. On input, specify an existing MMCSS task group ID, or use the value zero to create a new task group. On output, receives the actual task group ID.
The base priority of the work-queue threads.
If this function succeeds, it returns
To unregister the platform work queues from the MMCSS class, call
Unregisters the Microsoft Media Foundation platform work queues from a Multimedia Class Scheduler Service (MMCSS) task.
-If this function succeeds, it returns
Obtains and locks a shared work queue.
-The name of the MMCSS task.
The base priority of the work-queue threads. If the regular-priority queue is being used (wszClass=""), then the value 0 must be passed in.
The MMCSS task identifier. On input, specify an existing MMCSS task group ID, or use the value zero to create a new task group. If the regular priority queue is being used (wszClass=""), then
Receives an identifier for the new work queue. Use this identifier when queuing work items.
If this function succeeds, it returns
A multithreaded work queue uses a thread pool to dispatch work items. Whenever a thread becomes available, it dequeues the next work item from the queue. Work items are dequeued in first-in-first-out order, but work items are not serialized. In other words, the work queue does not wait for a work item to complete before it starts the next work item.
Within a single process, the Microsoft Media Foundation platform creates up to one multithreaded queue for each Multimedia Class Scheduler Service (MMCSS) task. The
The
If the regular priority queue is being used (wszClass=""), then
Gets the relative thread priority of a work queue.
-The identifier of the work queue. For private work queues, the identifier is returned by the
Receives the relative thread priority.
If this function succeeds, it returns
This function returns the relative thread priority set by the
Creates an asynchronous result object. Use this function if you are implementing an asynchronous method.
-Pointer to the object stored in the asynchronous result. This reference is returned by the
Pointer to the
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To invoke the callback specified in pCallback, call the
Invokes a callback method to complete an asynchronous operation.
-Pointer to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| Invalid work queue. For more information, see |
| The |
If you are implementing an asynchronous method, use this function to invoke the caller's
The callback is invoked from a Media Foundation work queue. For more information, see Writing an Asynchronous Method.
The
Creates a byte stream from a file.
- The requested access mode, specified as a member of the
The behavior of the function if the file already exists or does not exist, specified as a member of the
Bitwise OR of values from the
Pointer to a null-terminated string that contains the file name.
Receives a reference to the
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a byte stream that is backed by a temporary local file.
- The requested access mode, specified as a member of the
The behavior of the function if the file already exists or does not exist, specified as a member of the
Bitwise OR of values from the
Receives a reference to the
If this function succeeds, it returns
This function creates a file in the system temporary folder, and then returns a byte stream object for that file. The full path name of the file is stored in the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Begins an asynchronous request to create a byte stream from a file.
-The requested access mode, specified as a member of the
The behavior of the function if the file already exists or does not exist, specified as a member of the
Bitwise OR of values from the
Pointer to a null-terminated string containing the file name.
Pointer to the
Pointer to the
Receives an
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
When the request is completed, the callback object's
Completes an asynchronous request to create a byte stream from a file.
-Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Call this function when the
Cancels an asynchronous request to create a byte stream from a file.
-A reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
You can use this function to cancel a previous call to
Allocates system memory and creates a media buffer to manage it.
-Size of the buffer, in bytes.
Receives a reference to the
The function allocates a buffer with a 1-byte memory alignment. To allocate a buffer that is aligned to a larger memory boundary, call
When the media buffer object is destroyed, it releases the allocated memory.
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a media buffer that wraps an existing media buffer. The new media buffer points to the same memory as the original media buffer, or to an offset from the start of the memory.
-A reference to the
The start of the new buffer, as an offset in bytes from the start of the original buffer.
The size of the new buffer. The value of cbOffset + dwLength must be less than or equal to the size of the valid data in the original buffer. (The size of the valid data is returned by the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The requested offset or the requested length is not valid. |
The maximum size of the wrapper buffer is limited to the size of the valid data in the original buffer. This might be less than the allocated size of the original buffer. To set the size of the valid data, call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
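The cbOffset + dwLength rule can be illustrated with a zero-copy view. In this Python sketch (the helper name is hypothetical), a memoryview stands in for the wrapper buffer, which shares memory with the original rather than copying it:

```python
def create_wrapper_view(data, current_length, offset, length):
    """View of [offset, offset + length) of the *valid* data. The
    request must fit inside current_length, not the allocated size."""
    if offset + length > current_length:
        raise ValueError("E_INVALIDARG: offset or length not valid")
    return memoryview(data)[offset:offset + length]
```

Writes through the view are visible in the original buffer, mirroring how the wrapper buffer and the original buffer share memory.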
Converts a Media Foundation media buffer into a buffer that is compatible with DirectX Media Objects (DMOs).
-Pointer to the
Pointer to the
Offset in bytes from the start of the Media Foundation buffer. This offset defines where the DMO buffer starts. If this parameter is zero, the DMO buffer starts at the beginning of the Media Foundation buffer.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| Invalid argument. The pIMFMediaBuffer parameter must not be |
The DMO buffer created by this function also exposes the
If the Media Foundation buffer specified by pIMFMediaBuffer exposes the
Converts a Microsoft Direct3D 9 format identifier to a Microsoft DirectX Graphics Infrastructure (DXGI) format identifier.
-The D3DFORMAT value or FOURCC code to convert.
Returns a
Converts a Microsoft DirectX Graphics Infrastructure (DXGI) format identifier to a Microsoft Direct3D 9 format identifier.
-The
Returns a D3DFORMAT value or FOURCC code.
Locks the shared Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
-Receives a token that identifies this instance of the DXGI Device Manager. Use this token when calling
Receives a reference to the
If this function succeeds, it returns
This function obtains a reference to a DXGI Device Manager instance that can be shared between components. The Microsoft Media Foundation platform creates this instance of the DXGI Device Manager as a singleton object. Alternatively, you can create a new DXGI Device Manager by calling
The first time this function is called, the Media Foundation platform creates the shared DXGI Device Manager.
When you are done using the
Unlocks the shared Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
-If this function succeeds, it returns
Call this function after a successful call to the
Creates a media buffer object that manages a Direct3D 9 surface.
-Identifies the type of Direct3D 9 surface. Currently this value must be IID_IDirect3DSurface9.
A reference to the
If TRUE, the buffer's
For more information about top-down versus bottom-up images, see Image Stride.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
This function creates a media buffer object that holds a reference to the Direct3D surface specified in punkSurface. Locking the buffer gives the caller access to the surface memory. When the buffer object is destroyed, it releases the surface. For more information about media buffers, see Media Buffers.
Note: This function does not allocate the Direct3D surface itself. The buffer object created by this function also exposes the
This function does not support DXGI surfaces.
-Creates a media buffer object that manages a Windows Imaging Component (WIC) bitmap.
-Set this parameter to __uuidof(
.
A reference to the
Receives a reference to the
If this function succeeds, it returns
Creates a media buffer to manage a Microsoft DirectX Graphics Infrastructure (DXGI) surface.
-Identifies the type of DXGI surface. This value must be IID_ID3D11Texture2D.
A reference to the
The zero-based index of a subresource of the surface. The media buffer object is associated with this subresource.
If TRUE, the buffer's
For more information about top-down versus bottom-up images, see Image Stride.
Receives a reference to the
If this function succeeds, it returns
The returned buffer object supports the following interfaces:
Creates an object that allocates video samples that are compatible with Microsoft DirectX Graphics Infrastructure (DXGI).
-The identifier of the interface to retrieve. Specify one of the following values.
Value | Meaning |
---|---|
| Retrieve an |
| Retrieve an |
| Retrieve an |
| Retrieve an |
Receives a reference to the requested interface. The caller must release the interface.
If this function succeeds, it returns
This function creates an allocator for DXGI video surfaces. The buffers created by this allocator expose the
Creates an instance of the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
- Receives a token that identifies this instance of the DXGI Device Manager. Use this token when calling
Receives a reference to the
If this function succeeds, it returns
When you create an
Allocates system memory with a specified byte alignment and creates a media buffer to manage the memory.
-Size of the buffer, in bytes.
Specifies the memory alignment for the buffer. Use one of the following constants.
Value | Meaning |
---|---|
| Align to 1 byte. |
| Align to 2 bytes. |
| Align to 4 bytes. |
| Align to 8 bytes. |
| Align to 16 bytes. |
| Align to 32 bytes. |
| Align to 64 bytes. |
| Align to 128 bytes. |
| Align to 256 bytes. |
| Align to 512 bytes. |
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
When the media buffer object is destroyed, it releases the allocated memory.
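Aligned allocation is typically implemented by over-allocating and offsetting to the next aligned address. A Python sketch of that idea using ctypes (the helper name is illustrative; the alignment must be a power of two, matching the constants above):

```python
import ctypes

def aligned_alloc(size, alignment):
    """Over-allocate, then compute an offset so that base + offset is a
    multiple of `alignment`. Keep `raw` alive while the memory is used."""
    raw = ctypes.create_string_buffer(size + alignment)
    addr = ctypes.addressof(raw)
    offset = (-addr) % alignment
    return raw, offset
```

The at-most `alignment - 1` wasted leading bytes are the usual cost of this technique; a real allocator would also record the offset so the block can be freed correctly.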
-
Creates a media event object.
-The event type. See
The extended type. See
The event status. See
The value associated with the event, if any. See
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an event queue.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function creates a helper object that you can use to implement the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an empty media sample.
-Receives a reference to the
Initially the sample does not contain any media buffers.
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an empty attribute store.
-Receives a reference to the
The initial number of elements allocated for the attribute store. The attribute store grows as needed.
If this function succeeds, it returns
Attributes are used throughout Microsoft Media Foundation to configure objects, describe media formats, query object properties, and other purposes. For more information, see Attributes in Media Foundation.
For a complete list of all the defined attribute GUIDs in Media Foundation, see Media Foundation Attributes.
-
Initializes the contents of an attribute store from a byte array.
-Pointer to the
Pointer to the array that contains the initialization data.
Size of the pBuf array, in bytes.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The buffer is not valid. |
Use this function to deserialize an attribute store that was serialized with the
This function deletes any attributes that were previously stored in pAttributes.
-
Retrieves the size of the buffer needed for the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Use this function to find the size of the array that is needed for the
Converts the contents of an attribute store to a byte array.
-Pointer to the
Pointer to an array that receives the attribute data.
Size of the pBuf array, in bytes. To get the required size of the buffer, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The buffer given in pBuf is too small. |
The function skips any attributes with
To convert the byte array back into an attribute store, call
To write an attribute store to a stream, call the
Adds information about a Media Foundation transform (MFT) to the registry.
Applications can enumerate the MFT by calling the
If this function succeeds, it returns
The registry entries created by this function are read by the following functions:
Function | Description |
---|---|
| Enumerates MFTs by media type and category. |
| Extended version of |
| Looks up an MFT by CLSID and retrieves the registry information. |
This function does not register the CLSID of the MFT for the CoCreateInstance or CoGetClassObject functions.
To remove the entries from the registry, call
The formats given in the pInputTypes and pOutputTypes parameters are intended to help applications search for MFTs by format. Applications can use the
It is recommended to specify at least one input type in pInputTypes and one output type in the pOutputTypes parameter. Otherwise, the MFT might be skipped in the enumeration.
On 64-bit Windows, the 32-bit version of this function registers the MFT in the 32-bit node of the registry. For more information, see 32-bit and 64-bit Application Data in the Registry.
-Unregisters a Media Foundation transform (MFT).
-The CLSID of the MFT.
If this function succeeds, it returns
This function removes the registry entries created by the
It is safe to call
Registers a Media Foundation transform (MFT) in the caller's process.
-A reference to the
A
A wide-character null-terminated string that contains the friendly name of the MFT.
A bitwise OR of zero or more flags from the _MFT_ENUM_FLAG enumeration.
The number of elements in the pInputTypes array.
A reference to an array of
The number of elements in the pOutputTypes array.
A reference to an array of
If this function succeeds, it returns
The primary purpose of this function is to make an MFT available for automatic topology resolution without making the MFT available to other processes or applications.
After you call this function, the MFT can be enumerated by calling the
The pClassFactory parameter specifies a class factory object that creates the MFT. The class factory's IClassFactory::CreateInstance method must return an object that supports the
To unregister the MFT from the current process, call
If you need to register an MFT in the Protected Media Path (PMP) process, use the
Unregisters one or more Media Foundation transforms (MFTs) from the caller's process.
-A reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT specified by the pClassFactory parameter was not registered in this process. |
Use this function to unregister a local MFT that was previously registered through the
If the pClassFactory parameter is
Registers a Media Foundation transform (MFT) in the caller's process.
-The class identifier (CLSID) of the MFT.
A
A wide-character null-terminated string that contains the friendly name of the MFT.
A bitwise OR of zero or more flags from the _MFT_ENUM_FLAG enumeration.
The number of elements in the pInputTypes array.
A reference to an array of
The number of elements in the pOutputTypes array.
A reference to an array of
If this function succeeds, it returns
The primary purpose of this function is to make an MFT available for automatic topology resolution without making the MFT available to other processes or applications.
After you call this function, the MFT can be enumerated by calling the
To unregister the MFT from the current process, call
If you need to register an MFT in the Protected Media Path (PMP) process, use the
Unregisters a Media Foundation transform (MFT) from the caller's process.
-The class identifier (CLSID) of the MFT.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT specified by the clsidMFT parameter was not registered in this process. |
Use this function to unregister a local MFT that was previously registered through the
Enumerates Media Foundation transforms (MFTs) in the registry.
Starting in Windows 7, applications should use the
If this function succeeds, it returns
This function returns a list of all the MFTs in the specified category that match the search criteria given by the pInputType, pOutputType, and pAttributes parameters. Any of those parameters can be
If no MFTs match the criteria, the method succeeds but returns the value zero in pcMFTs.
-Gets a list of Microsoft Media Foundation transforms (MFTs) that match specified search criteria. This function extends the
If this function succeeds, it returns
The Flags parameter controls which MFTs are enumerated, and the order in which they are returned. The flags for this parameter fall into several groups.
The first set of flags specifies how an MFT processes data.
Flag | Description |
---|---|
| The MFT performs synchronous data processing in software. This is the original MFT processing model, and is compatible with Windows Vista. |
| The MFT performs asynchronous data processing in software. This processing model requires Windows 7. For more information, see Asynchronous MFTs. |
| The MFT performs hardware-based data processing, using either the AVStream driver or a GPU-based proxy MFT. MFTs in this category always process data asynchronously. For more information, see Hardware MFTs. |
Every MFT falls into exactly one of these categories. To enumerate a category, set the corresponding flag in the Flags parameter. You can combine these flags to enumerate more than one category. If none of these flags is specified, the default category is synchronous MFTs (
Next, the following flags include MFTs that are otherwise excluded from the results. By default, MFTs that match these criteria are excluded from the results. Use any of these flags to include them.
Flag | Description |
---|---|
| Include MFTs that must be unlocked by the application. |
| Include MFTs that are registered in the caller's process through either the |
| Include MFTs that are optimized for transcoding rather than playback. |
The last flag is used to sort and filter the results:
Flag | Description |
---|---|
| Sort and filter the results. |
If the
If you do not set the
Setting the Flags parameter to zero is equivalent to using the value
Setting Flags to
If no MFTs match the search criteria, the function returns
Gets a list of Microsoft Media Foundation transforms (MFTs) that match specified search criteria. This function extends the
If this function succeeds, it returns
The Flags parameter controls which MFTs are enumerated, and the order in which they are returned. The flags for this parameter fall into several groups.
The first set of flags specifies how an MFT processes data.
Flag | Description |
---|---|
| The MFT performs synchronous data processing in software. This is the original MFT processing model, and is compatible with Windows Vista. |
| The MFT performs asynchronous data processing in software. This processing model requires Windows 7. For more information, see Asynchronous MFTs. |
| The MFT performs hardware-based data processing, using either the AVStream driver or a GPU-based proxy MFT. MFTs in this category always process data asynchronously. For more information, see Hardware MFTs. |
Every MFT falls into exactly one of these categories. To enumerate a category, set the corresponding flag in the Flags parameter. You can combine these flags to enumerate more than one category. If none of these flags is specified, the default category is synchronous MFTs (
Next, the following flags include MFTs that are otherwise excluded from the results. By default, MFTs that match these criteria are excluded from the results. Use any of these flags to include them.
Flag | Description |
---|---|
| Include MFTs that must be unlocked by the application. |
| Include MFTs that are registered in the caller's process through either the |
| Include MFTs that are optimized for transcoding rather than playback. |
The last flag is used to sort and filter the results:
Flag | Description |
---|---|
| Sort and filter the results. |
If the
If you do not set the
Setting the Flags parameter to zero is equivalent to using the value
Setting Flags to
If no MFTs match the search criteria, the function returns
Gets information from the registry about a Media Foundation transform (MFT).
-The CLSID of the MFT.
Receives a reference to a wide-character string containing the friendly name of the MFT. The caller must free the string by calling CoTaskMemFree. This parameter can be
Receives a reference to an array of
Receives the number of elements in the ppInputTypes array. If ppInputTypes is
Receives a reference to an array of
Receives the number of elements in the ppOutputType array. If ppOutputTypes is
Receives a reference to the
This parameter can be
If this function succeeds, it returns
Gets a reference to the Microsoft Media Foundation plug-in manager.
-Receives a reference to the
If this function succeeds, it returns
Gets the merit value of a hardware codec.
-A reference to the
The size, in bytes, of the verifier array.
The address of a buffer that contains one of the following:
Receives the merit value.
If this function succeeds, it returns
The function fails if the MFT does not represent a hardware device with a valid Output Protection Manager (OPM) certificate.
-Registers a scheme handler in the caller's process.
-A string that contains the scheme. The scheme includes the trailing ':' character; for example, "http:".
A reference to the
If this function succeeds, it returns
Scheme handlers are used in Microsoft Media Foundation during the source resolution process, which creates a media source from a URL. For more information, see Scheme Handlers and Byte-Stream Handlers.
Within a process, local scheme handlers take precedence over scheme handlers that are registered in the registry. Local scheme handlers are not visible to other processes.
Use this function if you want to register a custom scheme handler for your application, but do not want the handler available to other applications.
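The precedence rule can be sketched as a two-level lookup: the per-process table is consulted before the registry table. All names below are illustrative, not the real API:

```python
class SourceResolver:
    """Local (per-process) scheme handlers win over registry handlers."""
    def __init__(self, registry_handlers):
        self._registry = dict(registry_handlers)
        self._local = {}

    def register_local_scheme_handler(self, scheme, handler):
        # The scheme includes the trailing ':' character, e.g. "http:".
        self._local[scheme] = handler

    def resolve(self, url):
        scheme = url.split(":", 1)[0] + ":"
        handler = self._local.get(scheme) or self._registry.get(scheme)
        if handler is None:
            raise LookupError("MF_E_UNSUPPORTED_SCHEME")
        return handler(url)
```

Registering a local handler for a scheme shadows the registry entry for this process only; other processes still see the registry handler.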
-Registers a byte-stream handler in the caller's process.
-A string that contains the file name extension for this handler.
A string that contains the MIME type for this handler.
A reference to the
If this function succeeds, it returns
Byte-stream handlers are used in Microsoft Media Foundation during the source resolution process, which creates a media source from a URL. For more information, see Scheme Handlers and Byte-Stream Handlers.
Within a process, local byte-stream handlers take precedence over byte-stream handlers that are registered in the registry. Local byte-stream handlers are not visible to other processes.
Use this function if you want to register a custom byte-stream handler for your application, but do not want the handler available to other applications.
Either szFileExtension or szMimeType can be
Creates a wrapper for a byte stream.
-A reference to the
Receives a reference to the
If this function succeeds, it returns
The
Creates an activation object for a Windows Runtime class.
-The class identifier that is associated with the activatable runtime class.
A reference to an optional IPropertySet object, which is used to configure the Windows Runtime class. This parameter can be
The interface identifier (IID) of the interface being requested. The activation object created by this function supports the following interfaces:
Receives a reference to the requested interface. The caller must release the interface.
If this function succeeds, it returns
To create the Windows Runtime object, call
Validates the size of a buffer for a video format block.
-Pointer to a buffer that contains the format block.
Size of the pBlock buffer, in bytes.
The function returns an
Return code | Description |
---|---|
| The buffer that contains the format block is large enough. |
| The buffer that contains the format block is too small, or the format block is not valid. |
| This function does not support the specified format type. |
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an empty media type.
- Receives a reference to the
If this function succeeds, it returns
The media type is created without any attributes.
-[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Creates an
If this function succeeds, it returns
Converts a Media Foundation audio media type to a
Pointer to the
Receives a reference to the
Receives the size of the
Contains a flag from the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
If the wFormatTag member of the returned structure is
Retrieves the image size for a video format. Given a
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The |
Before calling this function, you must set at least the following members of the
Also, if biCompression is BI_BITFIELDS, the
This function fails if the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the image size, in bytes, for an uncompressed video format.
-Media subtype for the video format. For a list of subtypes, see Media Type GUIDs.
Width of the image, in pixels.
Height of the image, in pixels.
Receives the size of each frame, in bytes. If the format is compressed or is not recognized, the value is zero.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Converts a video frame rate into a frame duration.
-The numerator of the frame rate.
The denominator of the frame rate.
Receives the average duration of a video frame, in 100-nanosecond units.
If this function succeeds, it returns
This function is useful for calculating time stamps on a sample, given the frame rate.
Also, average time per frame is used in the older
For certain common frame rates, the function gets the frame duration from a look-up table:
Frames per second (floating point) | Frames per second (fractional) | Average time per frame |
---|---|---|
59.94 | 60000/1001 | 166833 |
29.97 | 30000/1001 | 333667 |
23.976 | 24000/1001 | 417188 |
60 | 60/1 | 166667 |
30 | 30/1 | 333333 |
50 | 50/1 | 200000 |
25 | 25/1 | 400000 |
24 | 24/1 | 416667 |
Most video content uses one of the frame rates listed here. For other frame rates, the function calculates the duration.
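The mapping above can be sketched as follows. This is a minimal Python illustration of the documented behavior, not the Windows API itself; the function name and table constant are hypothetical, and the fallback arithmetic assumes simple rounding of 10^7 × denominator / numerator.

```python
ONE_SECOND_100NS = 10_000_000

# Entries copied from the look-up table above:
# (numerator, denominator) -> average time per frame, in 100-ns units.
COMMON_FRAME_DURATIONS = {
    (60000, 1001): 166833,
    (30000, 1001): 333667,
    (24000, 1001): 417188,
    (60, 1): 166667,
    (30, 1): 333333,
    (50, 1): 200000,
    (25, 1): 400000,
    (24, 1): 416667,
}

def frame_rate_to_average_time_per_frame(numerator, denominator):
    """Average duration of one video frame, in 100-nanosecond units."""
    if numerator == 0 or denominator == 0:
        raise ValueError("frame rate terms must be nonzero")
    duration = COMMON_FRAME_DURATIONS.get((numerator, denominator))
    if duration is None:
        # For rates outside the table, calculate the duration directly.
        duration = round(ONE_SECOND_100NS * denominator / numerator)
    return duration
```

For example, NTSC video (30000/1001) yields the tabulated 333667, while an uncommon rate such as 100/1 falls through to the calculation and yields 100000.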
Calculates the frame rate, in frames per second, from the average duration of a video frame.
-The average duration of a video frame, in 100-nanosecond units.
Receives the numerator of the frame rate.
Receives the denominator of the frame rate.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Average time per frame is used in the older
This function uses a look-up table for certain common durations. The table is listed in the Remarks section for the
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Initializes a media type from an
If this function succeeds, it returns
Initializes a media type from a
Pointer to the
Pointer to a
Size of the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Compares a full media type to a partial media type.
-Pointer to the
Pointer to the
If the full media type is compatible with the partial media type, the function returns TRUE. Otherwise, the function returns
A pipeline component can return a partial media type to describe a range of possible formats the component might accept. A partial media type has at least a major type
This function returns TRUE if the following conditions are both true:
Otherwise, the function returns
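The matching rule above (every attribute the partial type specifies must also appear in the full type with the same value) can be sketched in Python. This is an illustrative model only: plain dicts stand in for the media type attribute stores the real API compares, and the attribute names are hypothetical.

```python
def compare_full_to_partial(full_type, partial_type):
    """Return True when the full media type is compatible with the
    partial media type.

    Media types are modeled as dicts of attribute name -> value.
    """
    # A partial type must at least declare a major type.
    if "major_type" not in partial_type:
        return False
    # Every attribute present in the partial type must exist in the
    # full type with an identical value.
    return all(full_type.get(key) == value
               for key, value in partial_type.items())
```

A partial type of `{"major_type": "video"}` matches any full video type, while adding `"subtype": "NV12"` restricts the match to NV12 types.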
Creates a media type that wraps another media type.
- A reference to the
A
A
Applications can define custom subtype GUIDs.
Receives a reference to the
If this function succeeds, it returns
The original media type (pOrig) is stored in the new media type under the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type that was wrapped in another media type by the
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Creates a video media type from an
If this function succeeds, it returns
Instead of using the
Creates a partial video media type with a specified subtype.
- Pointer to a
Receives a reference to the
If this function succeeds, it returns
This function creates a media type and sets the major type equal to
You can get the same result with the following steps:
Queries whether a FOURCC code or D3DFORMAT value is a YUV format.
-FOURCC code or D3DFORMAT value.
The function returns one of the following values.
Return code | Description |
---|---|
| The value specifies a YUV format. |
| The value does not specify a recognized YUV format. |
This function checks whether Format specifies a YUV format. Not every YUV format is recognized by this function. However, if a YUV format is not recognized by this function, it is probably not supported for video rendering or DirectX video acceleration (DXVA).
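A FOURCC-based check like the one described can be sketched as follows. The packing mirrors the MAKEFOURCC macro; the set of YUV codes shown is a small illustrative sample, since the exact set the real function recognizes is not enumerated here.

```python
def make_fourcc(code):
    """Pack a four-character code into a little-endian DWORD,
    the way the MAKEFOURCC macro does."""
    a, b, c, d = (ord(ch) for ch in code)
    return a | (b << 8) | (c << 16) | (d << 24)

# A sample of common YUV FOURCCs (illustrative, not exhaustive).
YUV_FOURCCS = {make_fourcc(c) for c in
               ("NV12", "YV12", "I420", "IYUV", "YUY2", "UYVY", "AYUV")}

def is_yuv_format(value):
    """True if the packed FOURCC value is a recognized YUV format."""
    return value in YUV_FOURCCS
```

For example, `make_fourcc("NV12")` produces `0x3231564E`, which the check recognizes, while a compressed-format code such as `"H264"` does not.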
-This function is not implemented.
-Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Returns E_FAIL.
Calculates the minimum surface stride for a video format.
-FOURCC code or D3DFORMAT value that specifies the video format. If you have a video subtype
Width of the image, in pixels.
Receives the minimum surface stride, in pixels.
If this function succeeds, it returns
This function calculates the minimum stride needed to hold the image in memory. Use this function if you are allocating buffers in system memory. Surfaces allocated in video memory might require a larger stride, depending on the graphics card.
If you are working with a DirectX surface buffer, use the
For planar YUV formats, this function returns the stride for the Y plane. Depending on the format, the chroma planes might have a different stride.
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
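The minimum-stride arithmetic can be sketched for a few common formats. This is a hypothetical helper, not the Windows API: the bits-per-pixel table is illustrative, and the 4-byte row alignment for packed formats is an assumption about typical system-memory surfaces, not the API contract.

```python
# Illustrative bits-per-pixel values for a few common formats.
BITS_PER_PIXEL = {"RGB32": 32, "RGB24": 24, "YUY2": 16, "NV12": 12}

def min_stride(format_name, width):
    """Minimum stride, in bytes, for the primary (Y) plane."""
    bpp = BITS_PER_PIXEL[format_name]
    if format_name == "NV12":
        # Planar 4:2:0: the Y plane is 1 byte per pixel; the interleaved
        # UV plane shares the same stride at half the height.
        return width
    # Packed formats: round the row size up to a 4-byte boundary
    # (assumed alignment for system-memory buffers).
    return (width * bpp // 8 + 3) & ~3
```

For instance, a 100-pixel-wide RGB32 row needs 400 bytes, while a 2-pixel-wide RGB24 row (6 bytes of pixels) rounds up to 8.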
Retrieves the image size, in bytes, for an uncompressed video format.
-FOURCC code or D3DFORMAT value that specifies the video format.
Width of the image, in pixels.
Height of the image, in pixels.
Receives the size of one frame, in bytes. If the format is compressed or is not recognized, this value is zero.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function is equivalent to the
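The frame-size computation for uncompressed formats reduces to width × height × bits-per-pixel / 8. The Python sketch below is illustrative only (format names stand in for the subtype GUIDs and FOURCC values the real functions accept), including the documented behavior of returning zero for compressed or unrecognized formats.

```python
def image_size(subtype, width, height):
    """Frame size in bytes for a few uncompressed formats, else 0."""
    if subtype == "NV12":            # planar 4:2:0 -> 12 bits per pixel
        return width * height * 3 // 2
    if subtype in ("YUY2", "UYVY"):  # packed 4:2:2 -> 16 bits per pixel
        return width * height * 2
    if subtype == "RGB32":           # 32 bits per pixel
        return width * height * 4
    return 0  # compressed or unrecognized formats yield zero
```

A 640×480 NV12 frame, for example, occupies 460800 bytes: 307200 bytes of luma plus half that again for the interleaved chroma plane.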
Creates a video media type from a
If the function succeeds, it returns
Creates a Media Foundation media type from another format representation.
-Description | |
---|---|
AM_MEDIA_TYPE_REPRESENTATION | Convert a DirectShow |
Pointer to a buffer that contains the format representation to convert. The layout of the buffer depends on the value of guidRepresentation.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The |
If the original format is a DirectShow audio media type, and the format type is not recognized, the function sets the following attributes on the converted media type.
Attribute | Description |
---|---|
| Contains the format type |
| Contains the format block. |
-[This API is not supported and may be altered or unavailable in the future.]
Creates an audio media type from a
Pointer to a
Receives a reference to the
If this function succeeds, it returns
The
Alternatively, you can call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Returns the FOURCC or D3DFORMAT value for an uncompressed video format.
-Returns a FOURCC or D3DFORMAT value that identifies the video format. If the video format is compressed or not recognized, the return value is D3DFMT_UNKNOWN.
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Initializes an
If this function succeeds, it returns
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Initializes an
If this function succeeds, it returns
This function fills in some reasonable default values for the specified RGB format.
Developers are encouraged to use media type attributes instead of using the
In general, you should avoid calling this function. If you know all of the format details, you can fill in the
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Converts the extended color information from an
If this function succeeds, it returns
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Sets the extended color information in a
If this function succeeds, it returns
This function sets the following fields in the
Copies an image or image plane from one buffer to another.
-Pointer to the start of the first row of pixels in the destination buffer.
Stride of the destination buffer, in bytes.
Pointer to the start of the first row of pixels in the source image.
Stride of the source image, in bytes.
Width of the image, in bytes.
Number of rows of pixels to copy.
If this function succeeds, it returns
This function copies a single plane of the image. For planar YUV formats, you must call the function once for each plane. In this case, pDest and pSrc must point to the start of each plane.
This function is optimized if the MMX, SSE, or SSE2 instruction sets are available on the processor. The function performs a non-temporal store (the data is written to memory directly without polluting the cache).
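The per-plane, stride-aware copy described above can be sketched as follows. This is a plain Python model of the row-by-row copy (bytearrays stand in for the raw buffers; the real function is an optimized native memory copy).

```python
def copy_image_plane(dest, dest_stride, src, src_stride, width_bytes, lines):
    """Copy one image plane row by row between buffers whose strides differ.

    dest and src are bytearrays; each stride may exceed width_bytes when
    rows carry padding. For planar YUV formats, call once per plane.
    """
    for row in range(lines):
        s = row * src_stride
        d = row * dest_stride
        dest[d:d + width_bytes] = src[s:s + width_bytes]
```

Copying a 4-byte-wide, 2-row plane from a buffer with an 8-byte stride into one with a 6-byte stride moves only the pixel bytes and leaves each destination row's padding untouched.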
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Converts an array of 16-bit floating-point numbers into an array of 32-bit floating-point numbers.
-Pointer to an array of float values. The array must contain at least dwCount elements.
Pointer to an array of 16-bit floating-point values, typed as WORD values. The array must contain at least dwCount elements.
Number of elements in the pSrc array to convert.
If this function succeeds, it returns
The function converts dwCount values in the pSrc array and writes them into the pDest array.
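The element-wise conversion can be sketched in Python, which supports the IEEE 754 half-precision format through the `struct` module's `"e"` code. This is an illustrative reimplementation of the documented behavior, not the Windows API; the function name is hypothetical.

```python
import struct

def fp16_array_to_fp32(src_words):
    """Convert raw 16-bit float bit patterns (WORD values) to floats."""
    return [struct.unpack("<e", struct.pack("<H", w))[0]
            for w in src_words]
```

For example, the half-precision bit pattern `0x3C00` decodes to 1.0 and `0xC000` to -2.0.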
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Converts an array of 32-bit floating-point numbers into an array of 16-bit floating-point numbers.
-Pointer to an array of 16-bit floating-point values, typed as WORD values. The array must contain at least dwCount elements.
Pointer to an array of float values. The array must contain at least dwCount elements.
Number of elements in the pSrc array to convert.
If this function succeeds, it returns
The function converts the values in the pSrc array and writes them into the pDest array.
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Creates a system-memory buffer object to hold 2D image data.
-Width of the image, in pixels.
Height of the image, in pixels.
A FOURCC code or D3DFORMAT value that specifies the video format. If you have a video subtype
If TRUE, the buffer's
For more information about top-down versus bottom-up images, see Image Stride.
Receives a reference to the
This function can return one of these values.
Return code | Description |
---|---|
| Success. |
| Unrecognized video format. |
The returned buffer object also exposes the
Allocates a system-memory buffer that is optimal for a specified media type.
-A reference to the
The sample duration. This value is required for audio formats.
The minimum size of the buffer, in bytes. The actual buffer size might be larger. Specify zero to allocate the default buffer size for the media type.
The minimum memory alignment for the buffer. Specify zero to use the default memory alignment.
Receives a reference to the
If this function succeeds, it returns
For video formats, if the format is recognized, the function creates a 2-D buffer that implements the
For audio formats, the function allocates a buffer that is large enough to contain llDuration audio samples, or dwMinLength, whichever is larger.
This function always allocates system memory. For Direct3D surfaces, use the
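The audio sizing rule described above ("large enough for the duration, or the minimum length, whichever is larger") can be sketched as integer arithmetic. This is an assumption-laden illustration, not the actual allocator: the rounding-up to the alignment boundary and the ceiling division are inferred from the parameter descriptions.

```python
def media_buffer_size(avg_bytes_per_sec, duration_100ns,
                      min_length, alignment):
    """Pick an audio buffer size: enough bytes for the requested
    duration, at least min_length, rounded up to a power-of-two
    alignment (0 means default/none here)."""
    # ceil(avg_bytes_per_sec * duration / 10^7) without floating point
    from_duration = -(-avg_bytes_per_sec * duration_100ns // 10_000_000)
    size = max(from_duration, min_length)
    if alignment:
        size = (size + alignment - 1) & ~(alignment - 1)
    return size
```

One second (10,000,000 units) of 44.1 kHz stereo 16-bit PCM (176400 bytes/sec) therefore needs a 176400-byte buffer; a smaller duration is padded up to the minimum length and alignment.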
Creates an empty collection object.
-Receives a reference to the collection object's
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Allocates a block of memory.
-Number of bytes to allocate.
Zero or more flags. For a list of valid flags, see HeapAlloc in the Windows SDK documentation.
Reserved. Set to
Reserved. Set to zero.
Reserved. Set to eAllocationTypeIgnore.
If the function succeeds, it returns a reference to the allocated memory block. If the function fails, it returns
In the current version of Media Foundation, this function is equivalent to calling the HeapAlloc function and specifying the heap of the calling process.
To free the allocated memory, call
Frees a block of memory that was allocated by calling the
Calculates ((a * b) + d) / c, where each term is a 64-bit signed value.
-A multiplier.
Another multiplier.
The divisor.
The rounding factor.
Returns the result of the calculation. If numeric overflow occurs, the function returns _I64_MAX (positive overflow) or LLONG_MIN (negative overflow). If Mfplat.dll cannot be loaded, the function returns _I64_MAX.
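The calculation, including the saturation behavior on overflow, can be modeled in Python. Python integers never overflow, so the full-precision intermediate product below stands in for the widened arithmetic the real function must perform; the truncation toward zero mimics C-style integer division, and the function name is hypothetical.

```python
I64_MAX = 2**63 - 1
I64_MIN = -(2**63)

def ll_mul_div(a, b, c, d):
    """Compute (a * b + d) / c, saturating at the 64-bit signed limits.

    Division truncates toward zero, as in C.
    """
    if c == 0:
        raise ZeroDivisionError("divisor c must be nonzero")
    numerator = a * b + d
    quotient = abs(numerator) // abs(c)
    if (numerator < 0) != (c < 0):
        quotient = -quotient
    return max(I64_MIN, min(I64_MAX, quotient))
```

This is handy for timestamp math, e.g. converting 1001 frames at 30000/1001 fps into 100-ns units without intermediate overflow.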
Gets the class identifier for a content protection system.
-The
Receives the class identifier to the content protection system.
If this function succeeds, it returns
The class identifier can be used to create the input trust authority (ITA) for the content protection system. Call CoCreateInstance or
Creates the Media Session in the application's process.
-The function returns an
Return code | Description |
---|---|
| The function succeeded. |
If your application does not play protected content, you can use this function to create the Media Session in the application's process. To use the Media Session for protected content, you must call
You can use the pConfiguration parameter to specify any of the following attributes:
Creates an instance of the Media Session inside a Protected Media Path (PMP) process.
- The function returns an
Return code | Description |
---|---|
| The function succeeded. |
You can use the pConfiguration parameter to set any of the following attributes:
If this function cannot create the PMP Media Session because a trusted binary was revoked, the ppEnablerActivate parameter receives an
If the function successfully creates the PMP Media Session, the ppEnablerActivate parameter receives the value
Do not make calls to the PMP Media Session from a thread that is processing a window message sent from another thread. To test whether the current thread falls into this category, call InSendMessage.
-Creates the source resolver, which is used to create a media source from a URL or byte stream.
-Receives a reference to the source resolver's
If this function succeeds, it returns
[This API is not supported and may be altered or unavailable in the future. Instead, applications should use the PSCreateMemoryPropertyStore function to create property stores.]
Creates an empty property store object.
- Receives a reference to the
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the URL schemes that are registered for the source resolver.
-Pointer to a
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Retrieves the MIME types that are registered for the source resolver.
-Pointer to a
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a topology object.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a topology node.
- The type of node to create, specified as a member of the
Receives a reference to the node's
If this function succeeds, it returns
Gets the media type for a stream associated with a topology node.
-A reference to the
The identifier of the stream to query. This parameter is interpreted as follows:
If TRUE, the function gets an output type. If
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The stream index is invalid. |
This function gets the actual media type from the object that is associated with the topology node. The pNode parameter should specify a node that belongs to a fully resolved topology. If the node belongs to a partial topology, the function will probably fail.
Tee nodes do not have an associated object to query. For tee nodes, the function gets the node's input type, if available. Otherwise, if no input type is available, the function gets the media type of the node's primary output stream. The primary output stream is identified by the
Queries an object for a specified service interface.
This function is a helper function that wraps the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The service requested cannot be found in the object represented by punkObject. |
Returns the system time.
-Returns the system time, in 100-nanosecond units.
Creates the presentation clock. The presentation clock is used to schedule the time at which samples are rendered and to synchronize multiple streams.
-Receives a reference to the clock's
If this function succeeds, it returns
The caller must shut down the presentation clock by calling
Typically applications do not create the presentation clock. The Media Session automatically creates the presentation clock. To get a reference to the presentation clock from the Media Session, call
Creates a presentation time source that is based on the system time.
-Receives a reference to the object's
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a presentation descriptor.
-Number of elements in the apStreamDescriptors array.
Array of
Receives a reference to an
If this function succeeds, it returns
If you are writing a custom media source, you can use this function to create the source presentation descriptor. The presentation descriptor is created with no streams selected. Generally, a media source should select at least one stream by default. To select a stream, call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether a media presentation requires the Protected Media Path (PMP).
-Pointer to the
The function returns an
Return code | Description |
---|---|
| This presentation requires a protected environment. |
| This presentation does not require a protected environment. |
If this function returns
If the function returns S_FALSE, you can use the unprotected pipeline. Call
Internally, this function checks whether any of the stream descriptors in the presentation have the
Serializes a presentation descriptor to a byte array.
-Pointer to the
Receives the size of the ppbData array, in bytes.
Receives a reference to an array of bytes containing the serialized presentation descriptor. The caller must free the memory for the array by calling CoTaskMemFree.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To deserialize the presentation descriptor, pass the byte array to the
Deserializes a presentation descriptor from a byte array.
-Size of the pbData array, in bytes.
Pointer to an array of bytes that contains the serialized presentation descriptor.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a stream descriptor.
-Stream identifier.
Number of elements in the apMediaTypes array.
Pointer to an array of
Receives a reference to the
If this function succeeds, it returns
If you are writing a custom media source, you can use this function to create stream descriptors for the source. This function automatically creates the stream descriptor media type handler and initializes it with the list of types given in apMediaTypes. The function does not set the current media type on the handler, however. To set the type, call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a media-type handler that supports a single media type at a time.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
The media-type handler created by this function supports one media type at a time. Set the media type by calling
Shuts down a Media Foundation object and releases all resources associated with the object.
This function is a helper function that wraps the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function is not related to the
Creates the Streaming Audio Renderer.
-If this function succeeds, it returns
To configure the audio renderer, set any of the following attributes on the
Attribute | Description |
---|---|
| The audio endpoint device identifier. |
| The audio endpoint role. |
| Miscellaneous configuration flags. |
| The audio policy class. |
| The audio stream category. |
| Enables low-latency audio streaming. |
Creates an activation object for the Streaming Audio Renderer.
-If this function succeeds, it returns
To create the audio renderer, call
To configure the audio renderer, set any of the following attributes on the
Attribute | Description |
---|---|
| The audio endpoint device identifier. |
| The audio endpoint role. |
| Miscellaneous configuration flags. |
| The audio policy class. |
| The audio stream category. |
| Enables low-latency audio streaming. |
Creates an activation object for the enhanced video renderer (EVR) media sink.
-Handle to the window where the video will be displayed.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
To create the EVR, call
To configure the EVR, set any of the following attributes on the
Attribute | Description |
---|---|
| Activation object for a custom mixer. |
| CLSID for a custom mixer. |
| Flags for creating a custom mixer. |
| Activation object for a custom presenter. |
| CLSID for a custom presenter. |
| Flags for creating a custom presenter. |
When
Creates a media sink for authoring MP4 files.
-A reference to the
A reference to the
This parameter can be
A reference to the
This parameter can be
Receives a reference to the MP4 media sink's
If this function succeeds, it returns
The MP4 media sink supports a maximum of one video stream and one audio stream. The initial stream formats are given in the pVideoMediaType and pAudioMediaType parameters. To create an MP4 file with one stream, set the other stream type to
The number of streams is fixed when you create the media sink. The sink does not support the
To author 3GP files, use the
Creates a media sink for authoring 3GP files.
-A reference to the
A reference to the
This parameter can be
A reference to the
This parameter can be
Receives a reference to the 3GP media sink's
If this function succeeds, it returns
The 3GP media sink supports a maximum of one video stream and one audio stream. The initial stream formats are given in the pVideoMediaType and pAudioMediaType parameters. To create a 3GP file with one stream, set the other stream type to
The number of streams is fixed when you create the media sink. The sink does not support the
To author MP4 files, use the
Creates the MP3 media sink.
-A reference to the
Receives a reference to the
If this function succeeds, it returns
The MP3 media sink takes compressed MP3 audio samples as input, and writes an MP3 file with ID3 headers as output. The MP3 media sink does not perform MP3 audio encoding.
-Creates an instance of the AC-3 media sink.
-A reference to the
A reference to the
Attribute | Value |
---|---|
| |
|
Receives a reference to the
If this function succeeds, it returns
The AC-3 media sink takes compressed AC-3 audio as input and writes the audio to the byte stream without modification. The primary use for this media sink is to stream AC-3 audio over a network. The media sink does not perform AC-3 audio encoding.
-Creates an instance of the audio data transport stream (ADTS) media sink.
-A reference to the
A reference to the
Attribute | Value |
---|---|
| |
| |
| 0 (raw AAC) or 1 (ADTS) |
Receives a reference to the
If this function succeeds, it returns
The ADTS media sink converts Advanced Audio Coding (AAC) audio packets into an ADTS stream. The primary use for this media sink is to stream ADTS over a network. The output is not an audio file, but a stream of audio frames with ADTS headers.
The media sink can accept raw AAC frames (
Creates a generic media sink that wraps a multiplexer Microsoft Media Foundation transform (MFT).
-The subtype
A list of format attributes for the MFT output type. This parameter is optional and can be
A reference to the
Receives a reference to the
If this function succeeds, it returns
This function attempts to find a multiplexer MFT that supports an output type with the following definition:
To provide a list of additional format attributes:
The multiplexer MFT must be registered in the
Creates a media sink for authoring fragmented MP4 files.
-A reference to the
A reference to the
This parameter can be
A reference to the
This parameter can be
Receives a reference to the MP4 media sink's
If this function succeeds, it returns
Creates an Audio-Video Interleaved (AVI) Sink.
-Pointer to the byte stream that will be used to write the AVI file.
Pointer to the media type of the video input stream
Pointer to the media type of the audio input stream
Receives a reference to the
If this function succeeds, it returns
Creates a WAVE archive sink. The WAVE archive sink takes audio and writes it to a .wav file.
-Pointer to the byte stream that will be used to write the .wav file.
Pointer to the audio media type.
Receives a reference to the
Creates a new instance of the topology loader.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an activation object for the sample grabber media sink.
- Pointer to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
To create the sample grabber sink, call
Before calling ActivateObject, you can configure the sample grabber by setting any of the following attributes on the ppIActivate reference:
Creates the default implementation of the quality manager.
-Receives a reference to the quality manager's
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates the sequencer source.
-Reserved. Must be
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a
Sequencer element identifier. This value specifies the segment in which to begin playback. The element identifier is returned in the
Starting position within the segment, in 100-nanosecond units.
Pointer to a
If this function succeeds, it returns
The
Creates a media source that aggregates a collection of media sources.
-A reference to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The pSourceCollection collection does not contain any elements. |
The aggregated media source is useful for combining streams from separate media sources. For example, you can use it to combine a video capture source and an audio capture source.
Creates a credential cache object. An application can use this object to implement a custom credential manager.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a default proxy locator.
-The name of the protocol.
Note: In this release of Media Foundation, the default proxy locator does not support RTSP.
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the scheme handler for the network source.
-Interface identifier (IID) of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface. The scheme handler exposes the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the protected media path (PMP) server object.
-A member of the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the remote desktop plug-in object. Use this object if the application is running in a Terminal Services client session.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| Remote desktop connections are not allowed by the current session policy. |
[This API is not supported and may be altered or unavailable in the future. Instead, applications should use the PSCreateMemoryPropertyStore function to create named property stores.]
Creates an empty property store to hold name/value pairs.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an instance of the sample copier transform.
-Receives a reference to the
If this function succeeds, it returns
The sample copier is a Media Foundation transform (MFT) that copies data from input samples to output samples without modifying the data. The following data is copied from the sample:
This MFT is useful in the following situation:
The following diagram shows this situation with a media source and a media sink.
In order for the media sink to receive data from the media source, the data must be copied into the media samples owned by the media sink. The sample copier can be used for this purpose.
A specific example of such a media sink is the Enhanced Video Renderer (EVR). The EVR allocates samples that contain Direct3D surface buffers, so it cannot receive video samples directly from a media source. Starting in Windows 7, the topology loader automatically handles this case by inserting the sample copier between the media source and the EVR.
-Creates an empty transcode profile object.
The transcode profile stores configuration settings for the output file. These configuration settings are specified by the caller, and include audio and video stream properties, encoder settings, and container settings. To set these properties, the caller must call the appropriate
The configured transcode profile is passed to the
If this function succeeds, it returns
The
For example code that uses this function, see the following topics:
Creates a partial transcode topology.
The underlying topology builder creates a partial topology by connecting the required pipeline objects: source, encoder, and sink. The encoder and the sink are configured according to the settings specified by the caller in the transcode profile.
To create the transcode profile object, call the
The configured transcode profile is passed to the
The function returns an
Return code | Description |
---|---|
| The function call succeeded, and ppTranscodeTopo receives a reference to the transcode topology. |
| pwszOutputFilePath contains invalid characters. |
| No streams are selected in the media source. |
| The profile does not contain the |
| For one or more streams, cannot find an encoder that accepts the media type given in the profile. |
| The profile does not specify a media type for any of the selected streams on the media source. |
For example code that uses this function, see the following topics:
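The flow described above can be sketched as a single call. This is a hedged example: it assumes the caller has already created a media source and a configured transcode profile, and the output path is a placeholder.

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: build a partial transcode topology (source -> encoder -> sink)
// for an existing media source and transcode profile. The resulting
// topology would typically be passed to a media session.
HRESULT BuildTranscodeTopology(IMFMediaSource *pSource,
                               IMFTranscodeProfile *pProfile,
                               IMFTopology **ppTopology)
{
    // L"output.wmv" is a placeholder output file path.
    return MFCreateTranscodeTopology(pSource, L"output.wmv",
                                     pProfile, ppTopology);
}
```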
Creates a topology for transcoding to a byte stream.
-A reference to the
A reference to the
A reference to the
Receives a reference to the
If this function succeeds, it returns
This function creates a partial topology that contains the media source, the encoder, and the media sink.
-Gets a list of output formats from an audio encoder.
-Specifies the subtype of the output media. The encoder uses this value as a filter when it is enumerating the available output types. For information about the audio subtypes, see Audio Subtype GUIDs.
Bitwise OR of zero or more flags from the _MFT_ENUM_FLAG enumeration.
A reference to the
Value | Meaning |
---|---|
Set this attribute to unlock an encoder that has field-of-use descriptions. | |
Specifies a device conformance profile for a Windows Media encoder. | |
Sets the tradeoff between encoding quality and encoding speed. |
Receives a reference to the
This function assumes the encoder will be used in its default encoding mode, which is typically constant bit-rate (CBR) encoding. Therefore, the types returned by the function might not work with other modes, such as variable bit-rate (VBR) encoding.
Internally, this function works by calling
Creates the transcode sink activation object.
The transcode sink activation object can be used to create any of the following file sinks:
The transcode sink activation object exposes the
If this function succeeds, it returns
Creates an
Creates a Microsoft Media Foundation byte stream that wraps an
A reference to the
Receives a reference to the
Returns an
This function enables applications to pass an
Returns an
If this function succeeds, it returns
This function enables an application to pass a Media Foundation byte stream to an API that takes an
Creates a Microsoft Media Foundation byte stream that wraps an IRandomAccessStream object.
-If this function succeeds, it returns
Creates an IRandomAccessStream object that wraps a Microsoft Media Foundation byte stream.
-If this function succeeds, it returns
The returned byte stream object exposes the
Create an
If this function succeeds, it returns
Creates properties from a
If this function succeeds, it returns
Enumerates a list of audio or video capture devices.
-Pointer to an attribute store that contains search criteria. To create the attribute store, call
Value | Meaning |
---|---|
Specifies whether to enumerate audio or video devices. (Required.) | |
For audio capture devices, specifies the device role. (Optional.) | |
For video capture devices, specifies the device category. (Optional.) |
Receives an array of
Receives the number of elements in the pppSourceActivate array. If no capture devices match the search criteria, this parameter receives the value 0.
If this function succeeds, it returns
Each returned
Attribute | Description |
---|---|
| The display name of the device. |
| The major type and subtype GUIDs that describe the device's output format. |
| The type of capture device (audio or video). |
| The audio endpoint ID string. (Audio devices only.) |
| The device category. (Video devices only.) |
| Whether a device is a hardware or software device. (Video devices only.) |
| The symbolic link for the device driver. (Video devices only.) |
To create a media source from an
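The enumeration steps above can be sketched as follows. This is a hedged example that assumes MFStartup has been called; it enumerates video capture devices and releases the returned activation objects.

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: enumerate video capture devices. The attribute store carries the
// required search criterion (video devices); the returned array must be
// freed with CoTaskMemFree after each element is released.
HRESULT EnumerateVideoCaptureDevices()
{
    IMFAttributes *pAttrs = NULL;
    IMFActivate **ppDevices = NULL;
    UINT32 count = 0;

    HRESULT hr = MFCreateAttributes(&pAttrs, 1);
    if (SUCCEEDED(hr))
        hr = pAttrs->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                             MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);
    if (SUCCEEDED(hr))
        hr = MFEnumDeviceSources(pAttrs, &ppDevices, &count);
    if (SUCCEEDED(hr))
    {
        for (UINT32 i = 0; i < count; i++)
            ppDevices[i]->Release();
        CoTaskMemFree(ppDevices);
    }
    if (pAttrs) pAttrs->Release();
    return hr;
}
```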
Creates a media source for a hardware capture device.
-Pointer to the
Receives a reference to the media source's
If this function succeeds, it returns
The pAttributes parameter specifies an attribute store. To create the attribute store, call the
For audio capture devices, optionally set one of the following attributes:
Attribute | Description |
---|---|
| Specifies the audio endpoint ID of the audio capture device. |
| Specifies the device role. If this attribute is set, the function uses the default audio capture device for that device role. Do not combine this attribute with the |
If neither attribute is specified, the function selects the default audio capture device for the eCommunications role.
For video capture devices, you must set the following attribute:
Attribute | Description |
---|---|
| Specifies the symbolic link to the device. |
-Creates an activation object that represents a hardware capture device.
-Pointer to the
Receives a reference to the
This function creates an activation object that can be used to create a media source for a hardware device. To create the media source itself, call
The pAttributes parameter specifies an attribute store. To create the attribute store, call the
For audio capture devices, optionally set one of the following attributes:
Attribute | Description |
---|---|
| Specifies the audio endpoint ID of the audio capture device. |
| Specifies the device role. If this attribute is set, the function uses the default audio capture device for that device role. Do not combine this attribute with the |
If neither attribute is specified, the function selects the default audio capture device for the eCommunications role.
For video capture devices, you must set the following attribute:
Attribute | Description |
---|---|
| Specifies the symbolic link to the device. |
-Creates an
Loads a dynamic link library that is signed for the protected environment.
-The name of the dynamic link library to load. This dynamic link library must be signed for the protected environment.
Receives a reference to the
A single module load count is maintained on the dynamic link library (as it is with LoadLibrary). This load count is freed when the final release is called on the
Returns an
Gets the local system ID.
-Application-specific verifier value.
Length in bytes of verifier.
Returned ID string. This value must be freed by the caller by calling CoTaskMemFree.
The function returns an
Creates an
Checks whether a hardware security processor is supported for the specified media protection system.
-The identifier of the protection system that you want to check.
TRUE if the hardware security processor is supported for the specified protection system; otherwise, FALSE.
Creates an
Locks the shared Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
-Receives a token that identifies this instance of the DXGI Device Manager. Use this token when calling
Receives a reference to the
If this function succeeds, it returns
This function obtains a reference to a DXGI Device Manager instance that can be shared between components. The Microsoft Media Foundation platform creates this instance of the DXGI Device Manager as a singleton object. Alternatively, you can create a new DXGI Device Manager by calling
The first time this function is called, the Media Foundation platform creates the shared DXGI Device Manager.
When you are done using the
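The lock/unlock pattern described above can be sketched as follows. This is a hedged example: it assumes Media Foundation is started, and omits the work done with the device manager.

```cpp
#include <mfapi.h>

// Sketch: obtain the shared DXGI Device Manager, use it, then release the
// interface and unlock the manager.
HRESULT UseSharedDxgiManager()
{
    UINT resetToken = 0;
    IMFDXGIDeviceManager *pManager = NULL;

    HRESULT hr = MFLockDXGIDeviceManager(&resetToken, &pManager);
    if (SUCCEEDED(hr))
    {
        // ... use pManager (for example, pass it to a source reader
        //     or a hardware MFT), identifying it with resetToken ...
        pManager->Release();
        MFUnlockDXGIDeviceManager();
    }
    return hr;
}
```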
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Creates an instance of the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The supplied |
| The supplied LPCWSTR is null. |
Creates the source reader from a URL.
-The URL of a media file to open.
Pointer to the
Receives a reference to the
If this function succeeds, it returns
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
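The creation step above can be sketched as follows. This is a hedged example assuming CoInitializeEx and MFStartup have already been called; the input path is a placeholder, and no optional attributes are supplied.

```cpp
#include <mfapi.h>
#include <mfreadwrite.h>

// Sketch: create a source reader for a media file, then release it.
// Samples would be pulled with IMFSourceReader::ReadSample.
HRESULT OpenSourceReader()
{
    IMFSourceReader *pReader = NULL;
    HRESULT hr = MFCreateSourceReaderFromURL(L"input.mp4",  // placeholder
                                             NULL,          // no attributes
                                             &pReader);
    if (SUCCEEDED(hr))
        pReader->Release();
    return hr;
}
```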
-Creates the source reader from a byte stream.
-A reference to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates the source reader from a media source.
-A reference to the
Pointer to the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The source contains protected content. |
Call CoInitialize(Ex) and
By default, when the application releases the source reader, the source reader shuts down the media source by calling
To change this default behavior, set the
When using the Source Reader, do not call any of the following methods on the media source:
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates the sink writer from a URL or byte stream.
-A null-terminated string that contains the URL of the output file. This parameter can be
Pointer to the
If this parameter is a valid reference, the sink writer writes to the provided byte stream. (The byte stream must be writable.) Otherwise, if pByteStream is
Pointer to the
Receives a reference to the
Call CoInitialize(Ex) and
The first three parameters to this function can be
Description | pwszOutputURL | pByteStream | pAttributes |
---|---|---|---|
Specify a byte stream, with no URL. | NULL | non-NULL | Required (must not be NULL). |
Specify a URL, with no byte stream. | non-NULL | NULL | Optional (may be NULL). |
Specify both a URL and a byte stream. | non-NULL | non-NULL | Optional (may be NULL). |
The pAttributes parameter is required in the first case and optional in the others.
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
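The URL-only case from the table above can be sketched as follows. This is a hedged example: pByteStream is NULL, pAttributes is optional and omitted, and the output path is a placeholder.

```cpp
#include <mfapi.h>
#include <mfreadwrite.h>

// Sketch: create a sink writer for an output file. After creation, the
// caller adds streams, sets input types, calls BeginWriting, writes
// samples, and finishes with Finalize.
HRESULT OpenSinkWriter()
{
    IMFSinkWriter *pWriter = NULL;
    HRESULT hr = MFCreateSinkWriterFromURL(L"output.mp4",  // placeholder
                                           NULL,           // no byte stream
                                           NULL,           // no attributes
                                           &pWriter);
    if (SUCCEEDED(hr))
        pWriter->Release();
    return hr;
}
```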
-Creates the sink writer from a media sink.
-Pointer to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
Call CoInitialize(Ex) and
When you are done using the media sink, call the media sink's
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-
Writes the contents of an attribute store to a stream.
-Pointer to the
Bitwise OR of zero or more flags from the
Pointer to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
If dwOptions contains the
If the
Otherwise, the function calls CoMarshalInterface to serialize a proxy for the object.
If dwOptions does not include the
To load the attributes from the stream, call
The main purpose of this function is to marshal attributes across process boundaries.
-
Loads attributes from a stream into an attribute store.
-Pointer to the
Bitwise OR of zero or more flags from the
Pointer to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Use this function to deserialize an attribute store that was serialized with the
If dwOptions contains the
If the
Otherwise, the function calls CoUnmarshalInterface to deserialize a proxy for the object.
This function deletes any attributes that were previously stored in pAttr.
-Creates a generic activation object for Media Foundation transforms (MFTs).
-Receives a reference to the
If this function succeeds, it returns
Most applications will not use this function; it is used internally by the
An activation object is a helper object that creates another object, somewhat similar to a class factory. The
Attribute | Description |
---|---|
| Required. Contains the CLSID of the MFT. The activation object creates the MFT by passing this CLSID to the CoCreateInstance function. |
| Optional. Specifies the category of the MFT. |
| Contains various flags that describe the MFT. For hardware-based MFTs, set the |
| Optional. Contains the merit value of a hardware codec. If this attribute is set and its value is greater than zero, the activation object calls |
| Required for hardware-based MFTs. Specifies the symbolic link for the hardware device. The device proxy uses this value to configure the MFT. |
| Optional. Contains an If this attribute is set and the |
| Optional. Contains the encoding profile for an encoder. The value of this attribute is an If this attribute is set and the value of the |
| Optional. Specifies the preferred output format for an encoder. If this attribute is set and the value of the |
For more information about activation objects, see Activation Objects.
-Enumerates a list of audio or video capture devices.
-Pointer to an attribute store that contains search criteria. To create the attribute store, call
Value | Meaning |
---|---|
Specifies whether to enumerate audio or video devices. (Required.) | |
For audio capture devices, specifies the device role. (Optional.) | |
For video capture devices, specifies the device category. (Optional.) |
Receives an array of
Receives the number of elements in the pppSourceActivate array. If no capture devices match the search criteria, this parameter receives the value 0.
If this function succeeds, it returns
Each returned
Attribute | Description |
---|---|
| The display name of the device. |
| The major type and subtype GUIDs that describe the device's output format. |
| The type of capture device (audio or video). |
| The audio endpoint ID string. (Audio devices only.) |
| The device category. (Video devices only.) |
| Whether a device is a hardware or software device. (Video devices only.) |
| The symbolic link for the device driver. (Video devices only.) |
To create a media source from an
Applies to: desktop apps only
Creates an activation object for the sample grabber media sink.
- Pointer to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
To create the sample grabber sink, call
Before calling ActivateObject, you can configure the sample grabber by setting any of the following attributes on the ppIActivate reference:
Applies to: desktop apps | Metro style apps
Copies an image or image plane from one buffer to another.
-Pointer to the start of the first row of pixels in the destination buffer.
Stride of the destination buffer, in bytes.
Pointer to the start of the first row of pixels in the source image.
Stride of the source image, in bytes.
Width of the image, in bytes.
Number of rows of pixels to copy.
If this function succeeds, it returns
This function copies a single plane of the image. For planar YUV formats, you must call the function once for each plane. In this case, pDest and pSrc must point to the start of each plane.
This function is optimized if the MMX, SSE, or SSE2 instruction sets are available on the processor. The function performs a non-temporal store (the data is written to memory directly without polluting the cache).
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
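The per-plane copy described above can be sketched as follows. This is a hedged example: the caller supplies both buffers and strides, and the width/height values are illustrative (one byte per luma sample for a Y plane).

```cpp
#include <mfapi.h>

// Sketch: copy one image plane (e.g. the Y plane of an NV12 frame)
// between buffers that may have different strides. For planar formats,
// the function is called once per plane with the start of that plane.
HRESULT CopyYPlane(BYTE *pDest, LONG destStride,
                   const BYTE *pSrc, LONG srcStride)
{
    const DWORD widthInBytes = 640;  // illustrative width, in bytes
    const DWORD rows         = 480;  // illustrative row count
    return MFCopyImage(pDest, destStride, pSrc, srcStride,
                       widthInBytes, rows);
}
```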
-
Uses profile data from a profile object to configure settings in the ContentInfo object.
-If there is already information in the ContentInfo object when this method is called, it is replaced by the information from the profile object.
-
Retrieves an Advanced Systems Format (ASF) profile that describes the ASF content.
-The profile is set by calling either
The ASF profile object returned by this method does not include any of the MF_PD_ASF_xxx attributes (see Presentation Descriptor Attributes). To get these attributes, do the following:
Call
(Optional.) Call
An ASF profile is a template for file encoding, and is intended mainly for creating ASF content. If you are reading an existing ASF file, it is recommended that you use the presentation descriptor to get information about the file. One exception is that the profile contains the mutual exclusion and stream prioritization objects, which are not exposed directly from the presentation descriptor.
-Retrieves the size of the header section of an Advanced Systems Format (ASF) file.
-The
Receives the size, in bytes, of the header section of the content. The value includes the size of the ASF Header Object plus the size of the header section of the Data Object. Therefore, the resulting value is the offset to the start of the data packets in the ASF Data Object.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer does not contain valid ASF data. |
| The buffer does not contain enough valid data. |
The header of an ASF file or stream can be passed to the
Parses the information in an ASF header and uses that information to set values in the ContentInfo object. You can pass the entire header in a single buffer or send it in several pieces.
-Pointer to the
Offset, in bytes, of the first byte in the buffer relative to the beginning of the header.
The method returns an
Return code | Description |
---|---|
| The header is completely parsed and validated. |
| The input buffer does not contain valid ASF data. |
| The input buffer is too small. |
| The method succeeded, but the header passed was incomplete. This is the successful return code for all calls but the last one when passing the header in pieces. |
If you pass the header in pieces, the ContentInfo object will keep references to the buffer objects until the entire header is parsed. Therefore, do not write over the buffers passed into this method.
The start of the Header object has the following layout in memory:
Field Name | Size in bytes |
---|---|
Object ID | 16 |
Object Size | 8 |
Number of Header Objects | 4 |
Reserved1 | 1 |
Reserved2 | 1 |
The first call to ParseHeader reads everything up to and including Reserved2, so it requires a minimum of 30 bytes. (Note that the
Encodes the data in the MFASFContentInfo object into a binary Advanced Systems Format (ASF) header.
- A reference to the
Size of the encoded ASF header in bytes. If pIHeader is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ASF Header Objects do not exist for the media that the ContentInfo object holds reference to. |
| The ASF Header Object size exceeds 10 MB. |
| The buffer passed in pIHeader is not large enough to hold the ASF Header Object information. |
The size received in the pcbHeader parameter includes the padding size. The content information shrinks or expands the padding data depending on the size of the ASF Header Objects.
During this call, the stream properties are set based on the encoding properties of the profile. These properties are available through the
Retrieves an Advanced Systems Format (ASF) profile that describes the ASF content.
-Receives an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The profile is set by calling either
The ASF profile object returned by this method does not include any of the MF_PD_ASF_xxx attributes (see Presentation Descriptor Attributes). To get these attributes, do the following:
Call
(Optional.) Call
An ASF profile is a template for file encoding, and is intended mainly for creating ASF content. If you are reading an existing ASF file, it is recommended that you use the presentation descriptor to get information about the file. One exception is that the profile contains the mutual exclusion and stream prioritization objects, which are not exposed directly from the presentation descriptor.
-
Uses profile data from a profile object to configure settings in the ContentInfo object.
-The
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If there is already information in the ContentInfo object when this method is called, it is replaced by the information from the profile object.
-
Creates a presentation descriptor for ASF content.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a property store that can be used to set encoding properties.
-Stream number to configure. Set to zero to configure file-level encoding properties.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the flags that indicate the selected indexer options.
-You must call this method before initializing the indexer object with
Sets indexer options.
-Bitwise OR of zero or more flags from the MFASF_INDEXER_FLAGS enumeration specifying the indexer options to use.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The indexer object was initialized before setting flags for it. For more information, see Remarks. |
Retrieves the flags that indicate the selected indexer options.
-Receives a bitwise OR of zero or more flags from the MFASF_INDEXER_FLAGS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pdwFlags is |
You must call this method before initializing the indexer object with
Initializes the indexer object. This method reads information in a ContentInfo object about the configuration of the content and the properties of the existing index, if present. Use this method before using the indexer for either writing or reading an index. You must make this call before using any of the other methods of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid ASF data. |
| Unexpected error. |
The indexer needs to examine the data in the ContentInfo object to properly write or read the index for the content. The indexer will not make changes to the content information and will not hold any references to the
In the ASF header, the maximum data-packet size must equal the minimum data-packet size. Otherwise, the method returns
Retrieves the offset of the index object from the start of the content.
-Pointer to the
Receives the offset of the index relative to the beginning of the content described by the ContentInfo object. This is the position relative to the beginning of the ASF file.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pIContentInfo is |
The index continues from the offset retrieved by this method to the end of the file.
You must call
If the index is retrieved by using more than one call to
Adds byte streams to be indexed.
-An array of
The number of references in the ppIByteStreams array.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The indexer object has already been initialized and it has packets which have been indexed. |
For a reading scenario, only one byte stream should be used by the indexer object. For an index-generating scenario, the number of byte streams depends on how many index objects need to be generated.
-
Retrieves the number of byte streams that are in use by the indexer object.
-Receives the number of byte streams that are in use by the indexer object.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pcByteStreams is |
Retrieves the index settings for a specified stream and index type.
-Pointer to an
A variable that retrieves a Boolean value specifying whether the index described by pIndexIdentifier has been created.
A buffer that receives the index descriptor. The index descriptor consists of an
On input, specifies the size, in bytes, of the buffer that pbIndexDescriptor points to. The value can be zero if pbIndexDescriptor is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer size specified in pcbIndexDescriptor is too small. |
To read an existing ASF index, call
If an index exists for the stream and the value passed into pcbIndexDescriptor is smaller than the required size of the pbIndexDescriptor buffer, the method returns
If there is no index for the specified stream, the method returns
Configures the index for a stream.
-The index descriptor to set. The index descriptor is an
The size, in bytes, of the index descriptor.
A Boolean value. Set to TRUE to have the indexer create an index of the type specified for the stream specified in the index descriptor.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| An attempt was made to change the index status in a seek-only scenario. For more information, see Remarks. |
You must make all calls to SetIndexStatus before making any calls to
The indexer object is configured to create temporal indexes for each stream by default. Call this method only if you want to override the default settings.
You cannot use this method in an index reading scenario. You can only use this method when writing indexes.
-Given a desired seek time, gets the offset from which the client should start reading data.
-The value of the index entry for which to get the position. The format of this value varies depending on the type of index, which is specified in the index identifier. For time-based indexing, the variant type is VT_I8 and the value is the desired seek time, in 100-nanosecond units.
Pointer to an
Receives the offset within the data segment of the ASF Data Object. The offset is in bytes, and is relative to the start of packet 0. The offset gives the starting location from which the client should begin reading from the stream. This location might not correspond exactly to the requested seek time.
For reverse playback, if no key frame exists after the desired seek position, this parameter receives the value MFASFINDEXER_READ_FOR_REVERSEPLAYBACK_OUTOFDATASEGMENT. In that case, the seek position should be 1 byte past the end of the data segment.
Receives the approximate time stamp of the data that is located at the offset returned in the pcbOffsetWithinData parameter. The accuracy of this value is equal to the indexing interval of the ASF index, typically about 1 second.
If the approximate time stamp cannot be determined, this parameter receives the value MFASFINDEXER_APPROX_SEEK_TIME_UNKNOWN.
Receives the payload number of the payload that contains the information for the specified stream. Packets can contain multiple payloads, each containing data for a different stream. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The requested seek time is out of range. |
| No index exists of the specified type for the specified stream. |
Accepts an ASF packet for the file and creates index entries for them.
- Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The argument passed in is |
| The indexer is not initialized. |
The ASF indexer creates indexes for a file internally. You can get the completed index for all data packets sent to the indexer by committing the index with
When this method creates index entries, they are immediately available for use by
The media sample specified in pIASFPacketSample must hold a buffer that contains a single ASF packet. Get the sample from the ASF multiplexer by calling the
You cannot use this method while reading an index, only when writing an index.
-
Adds information about a new index to the ContentInfo object associated with ASF content. You must call this method before copying the index to the content so that the index will be readable by the indexer later.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The caller made an invalid request. For more information, see Remarks. |
For the index to function properly, you must call this method after all ASF packets in the file have been passed to the indexer by using the
An application must use the CommitIndex method only when writing a new index; otherwise, CommitIndex may return
You cannot use this method in an index reading scenario. You can only use this method when writing indexes.
-
Retrieves the size, in bytes, of the buffer required to store the completed index.
-Receives the size of the index, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index has not been committed. For more information; see Remarks. |
Use this method to get the size of the index and then allocate a buffer big enough to hold it.
The index must be committed with a call to
Call
You cannot use this method in a reading scenario. You can only use this method when writing indexes.
-
Retrieves the completed index from the ASF indexer object.
-Pointer to the
The offset of the data to be retrieved, in bytes from the start of the index data. Set to 0 for the first call. If subsequent calls are needed (the buffer is not large enough to hold the entire index), set to the byte following the last one retrieved.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index was not committed before attempting to get the completed index. For more information, see Remarks. |
This method uses as much of the buffer as possible, and updates the length of the buffer appropriately.
If pIIndexBuffer is large enough to contain the entire buffer, cbOffsetWithinIndex should be 0, and the call needs to be made only once. Otherwise, there should be no gaps between successive buffers.
The user must write this data to the content at cbOffsetFromIndexStart bytes after the end of the ASF data object. You can call
This call will not succeed unless
You cannot use this method in an index reading scenario. You can only use this method when writing indexes.
-Provides methods to create Advanced Systems Format (ASF) data packets. The methods of this interface process input samples into the packets that make up an ASF data section. The ASF multiplexer exposes this interface. To create the ASF multiplexer, call
Sets the maximum time by which samples from various streams can be out of synchronization. The multiplexer will not accept a sample with a time stamp that is out of synchronization with the latest samples from any other stream by an amount that exceeds the synchronization tolerance.
-The synchronization tolerance is the maximum difference in presentation times at any given point between samples of different streams that the ASF multiplexer can accommodate. That is, if the synchronization tolerance is 3 seconds, no stream can be more than 3 seconds behind any other stream in the time stamps passed to the multiplexer. The multiplexer determines a default synchronization tolerance to use, but this method overrides it (usually to increase it). More tolerance means the potential for greater latency in the multiplexer. If the time stamps are synchronized among the streams, actual latency will be much lower than msSyncTolerance.
-
Initializes the multiplexer with the data from an ASF ContentInfo object.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This call must be made once at the beginning of encoding, with pIContentInfo pointing to the ASF ContentInfo object that describes the content to be encoded. This enables the ASF multiplexer to see, among other things, which streams will be present in the encoding session. This call typically does not affect the data in the ASF ContentInfo object.
-
Sets multiplexer options.
-Bitwise OR of zero or more members of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves flags indicating the configured multiplexer options.
Receives a bitwise OR of zero or more values from the MFASF_MULTIPLEXER_FLAGS enumeration.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Delivers input samples to the multiplexer.
The stream number of the stream to which the sample belongs.
Pointer to the IMFSample interface of the input sample.
The adjustment to apply to the time stamp of the sample. This parameter is used if the caller wants to shift the sample time on pISample. This value should be positive if the time stamp should be pushed ahead and negative if the time stamp should be pushed back. This time stamp is added to sample time on pISample, and the resulting time is used by the multiplexer instead of the original sample time. If no adjustment is needed, set this value to 0.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_NOTACCEPTING | There are too many packets waiting to be retrieved from the multiplexer. Call GetNextPacket to get the packets. |
MF_E_BANDWIDTH_OVERRUN | The sample that was processed violates the bandwidth limitations specified for the stream in the ASF ContentInfo object. When this error is generated, the sample is dropped. |
MF_E_INVALIDSTREAMNUMBER | The value passed in wStreamNumber is invalid. |
MF_E_LATE_SAMPLE | The presentation time of the input media sample is earlier than the send time. |
The application passes samples to ProcessSample, and the ASF multiplexer queues them internally until they are ready to be placed into ASF packets. Call GetNextPacket to retrieve the generated packets.
After each call to ProcessSample, call GetNextPacket in a loop to get all of the available data packets. For a code example, see Generating New ASF Data Packets.
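The loop described above can be sketched as follows. This is a minimal illustration, not the documented sample; the function name WriteSampleToMux is hypothetical, and error handling and packet output are abbreviated.

```cpp
// Sketch: feed one sample to the ASF multiplexer, then drain all
// packets that are ready. Assumes pMux has been initialized and that
// wStreamNumber/pSample come from the caller.
HRESULT WriteSampleToMux(IMFASFMultiplexer *pMux, WORD wStreamNumber,
                         IMFSample *pSample)
{
    HRESULT hr = pMux->ProcessSample(wStreamNumber, pSample, 0); // no time-stamp adjustment
    while (SUCCEEDED(hr))
    {
        DWORD dwStatus = 0;
        IMFSample *pPacket = NULL;
        hr = pMux->GetNextPacket(&dwStatus, &pPacket);
        if (FAILED(hr) || pPacket == NULL)
        {
            break; // no packet was returned
        }
        // ... write the packet to the ASF data section here ...
        pPacket->Release();
        if ((dwStatus & ASF_STATUSFLAGS_INCOMPLETE) == 0)
        {
            break; // the packet queue is drained
        }
    }
    return hr;
}
```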
-
Retrieves the next output ASF packet from the multiplexer.
Receives zero or more status flags. If more than one packet is waiting, the method sets the ASF_STATUSFLAGS_INCOMPLETE flag.
Receives a reference to the IMFSample interface of the first output sample waiting in the queue. The caller must release the interface. If no packets are waiting, this parameter receives the value NULL.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
The client needs to call this method, ideally after every call to ProcessSample, to retrieve all of the packets that are ready.
If no packets are ready, the method returns S_OK but does not return a sample in ppIPacket.
Signals the multiplexer to process all queued output media samples. Call this method after passing the last sample to the multiplexer.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
You must call Flush after the last sample has been passed into the ASF multiplexer and before you call
Collects data from the multiplexer and updates the ASF ContentInfo object to include that information in the ASF Header Object.
Pointer to the IMFASFContentInfo interface of the ContentInfo object to update.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_FLUSH_NEEDED | There are pending output media samples waiting in the multiplexer. Call Flush to force the media samples to be packetized. |
For non-live encoding scenarios (such as encoding to a file), the user should call End to update the specified ContentInfo object, adding data that the multiplexer has collected during the packet generation process. The user should then call IMFASFContentInfo::GenerateHeader and rewrite the header of the encoded file with the updated data.
During live encoding, it is usually not possible to rewrite the header, so this call is not required for live encoding. (The header in those cases will simply lack some of the information that was not available until the end of the encoding session.)
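The non-live finalization sequence can be sketched as follows. This is an illustrative outline (the function name FinalizeEncoding is hypothetical); packet draining and header rewriting are elided.

```cpp
// Sketch: end-of-encoding sequence for file (non-live) scenarios.
// Flush pending samples, then let the multiplexer update the
// ContentInfo object so a final header can be generated.
HRESULT FinalizeEncoding(IMFASFMultiplexer *pMux,
                         IMFASFContentInfo *pContentInfo)
{
    HRESULT hr = pMux->Flush();        // packetize all queued samples
    if (SUCCEEDED(hr))
    {
        // ... drain the remaining packets with GetNextPacket here ...
        hr = pMux->End(pContentInfo);  // update the ContentInfo object
    }
    return hr;
}
```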
-
Retrieves multiplexer statistics.
The stream number for which to obtain statistics.
Pointer to an ASF_MUX_STATISTICS structure that receives the statistics.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Sets the maximum time by which samples from various streams can be out of synchronization. The multiplexer will not accept a sample with a time stamp that is out of synchronization with the latest samples from any other stream by an amount that exceeds the synchronization tolerance.
Synchronization tolerance in milliseconds.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
The synchronization tolerance is the maximum difference in presentation times at any given point between samples of different streams that the ASF multiplexer can accommodate. That is, if the synchronization tolerance is 3 seconds, no stream can be more than 3 seconds behind any other stream in the time stamps passed to the multiplexer. The multiplexer determines a default synchronization tolerance to use, but this method overrides it (usually to increase it). More tolerance means the potential for greater latency in the multiplexer. If the time stamps are synchronized among the streams, actual latency will be much lower than msSyncTolerance.
Configures an Advanced Systems Format (ASF) mutual exclusion object, which manages information about a group of streams in an ASF profile that are mutually exclusive. When streams or groups of streams are mutually exclusive, only one of them is read at a time; they are not read concurrently.
A common example of mutual exclusion is a set of streams that each include the same content encoded at a different bit rate. The stream that is used is determined by the available bandwidth to the reader.
An
An ASF profile object can support multiple mutual exclusions. Each must be configured using a separate ASF mutual exclusion object.
-
Retrieves the type of mutual exclusion represented by the Advanced Systems Format (ASF) mutual exclusion object.
A variable that receives the type identifier. For a list of predefined mutual exclusion type constants, see ASF Mutual Exclusion Type GUIDs.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Sometimes, content must be made mutually exclusive in more than one way. For example, a video file might contain audio streams of several bit rates for each of several languages. To handle this type of complex mutual exclusion, you must configure more than one ASF mutual exclusion object. For more information, see
Sets the type of mutual exclusion that is represented by the Advanced Systems Format (ASF) mutual exclusion object.
The type of mutual exclusion that is represented by the ASF mutual exclusion object. For a list of predefined mutual exclusion type constants, see ASF Mutual Exclusion Type GUIDs.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Sometimes, content must be made mutually exclusive in more than one way. For example, a video file might contain audio streams in several bit rates for each of several languages. To handle this type of complex mutual exclusion, you must configure more than one ASF mutual exclusion object. For more information, see
Retrieves the number of records in the Advanced Systems Format mutual exclusion object.
Receives the count of records.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Each record includes one or more streams. Every stream in a record is mutually exclusive of streams in every other record.
Use this method in conjunction with
Retrieves the stream numbers contained in a record in the Advanced Systems Format mutual exclusion object.
The number of the record for which to retrieve the stream numbers.
An array that receives the stream numbers. Set to NULL to retrieve the count of stream numbers in the record.
On input, the number of elements in the array referenced by pwStreamNumArray. On output, the method sets this value to the count of stream numbers in the record. You can call GetStreamsForRecord with pwStreamNumArray set to NULL to retrieve the count, then allocate an array of that size and call the method again.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Adds a stream number to a record in the Advanced Systems Format mutual exclusion object.
The record number to which the stream is added. A record number is set by the AddRecord method.
The stream number to add to the record.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
 | The specified stream number is already associated with the record. |
Each record includes one or more streams. Every stream in a record is mutually exclusive of all streams in every other record.
-
Removes a stream number from a record in the Advanced Systems Format mutual exclusion object.
The record number from which to remove the stream number.
The stream number to remove from the record.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
 | The stream number is not listed for the specified record. |
Removes a record from the Advanced Systems Format (ASF) mutual exclusion object.
The index of the record to remove.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
When a record is removed, the ASF mutual exclusion object indexes the remaining records so that they are sequential starting with zero. You should enumerate the records to ensure that you have the correct index for each record. If the record removed is the one with the highest index, removing it has no effect on the other indexes.
-
Adds a record to the mutual exclusion object. A record specifies streams that are mutually exclusive with the streams in all other records.
Receives the index assigned to the new record. Record indexes are zero-based and sequential.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
A record can include one or more stream numbers. All of the streams in a record are mutually exclusive with all the streams in all other records in the ASF mutual exclusion object.
You can use records to create complex mutual exclusion scenarios by using multiple ASF mutual exclusion objects.
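The record mechanics described above can be sketched as follows. This is an illustrative outline (the function name ConfigureBitrateMutex is hypothetical); it assumes the object was created with IMFASFProfile::CreateMutualExclusion and that streams 1 and 2 are two bit-rate variants of the same content.

```cpp
// Sketch: make streams 1 and 2 mutually exclusive by bit rate.
// Each record holds one bit-rate variant; streams in different
// records are mutually exclusive with each other.
HRESULT ConfigureBitrateMutex(IMFASFMutualExclusion *pMutex)
{
    DWORD recA = 0, recB = 0;
    HRESULT hr = pMutex->SetType(MFASFMutexType_Bitrate);
    if (SUCCEEDED(hr)) hr = pMutex->AddRecord(&recA);
    if (SUCCEEDED(hr)) hr = pMutex->AddStreamForRecord(recA, 1); // low bit rate
    if (SUCCEEDED(hr)) hr = pMutex->AddRecord(&recB);
    if (SUCCEEDED(hr)) hr = pMutex->AddStreamForRecord(recB, 2); // high bit rate
    return hr;
}
```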
-
Creates a copy of the Advanced Systems Format mutual exclusion object.
Receives a reference to the IMFASFMutualExclusion interface of the new object. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
The cloned object is a new object, completely independent of the object from which it was cloned.
-
Retrieves the number of streams in the profile.
-
Adds a stream to the profile or reconfigures an existing stream.
If the stream number in the ASF stream configuration object is already included in the profile, the information in the new object replaces the old one. If the profile does not contain a stream for the stream number, the ASF stream configuration object is added as a new stream.
-
Retrieves the number of streams in the profile.
Receives the number of streams in the profile.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Retrieves a stream from the profile by stream index, and/or retrieves the stream number for a stream index.
The index of the stream to retrieve. Stream indexes are sequential and zero-based. You can get the number of streams that are in the profile by calling the GetStreamCount method.
Receives the stream number of the requested stream. Stream numbers are one-based and are not necessarily sequential. This parameter can be set to NULL if the stream number is not required.
Receives a reference to the IMFASFStreamConfig interface of the requested stream. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
This method does not create a copy of the stream configuration object. The reference that is retrieved points to the object within the profile object. You must not make any changes to the stream configuration object using this reference, because doing so can affect the profile object in unexpected ways.
To change the configuration of the stream configuration object in the profile, you must first clone the stream configuration object by calling Clone, make the changes to the clone, and then update the profile by calling SetStream with the modified object.
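The clone-modify-store pattern can be sketched as follows. This is an illustrative outline (the function name RenumberStream is hypothetical); it assumes wOldNumber identifies an existing stream in the profile.

```cpp
// Sketch: safely modify a stream in a profile. Clone the stream
// configuration object, change the clone, then store it back with
// SetStream instead of mutating the profile's internal object.
HRESULT RenumberStream(IMFASFProfile *pProfile, WORD wOldNumber, WORD wNewNumber)
{
    IMFASFStreamConfig *pStream = NULL, *pClone = NULL;
    HRESULT hr = pProfile->GetStreamByNumber(wOldNumber, &pStream);
    if (SUCCEEDED(hr)) hr = pStream->Clone(&pClone);
    if (SUCCEEDED(hr)) hr = pClone->SetStreamNumber(wNewNumber);
    if (SUCCEEDED(hr)) hr = pProfile->SetStream(pClone); // replaces or adds by stream number
    if (pClone)  pClone->Release();
    if (pStream) pStream->Release();
    return hr;
}
```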
Retrieves an Advanced Systems Format (ASF) stream configuration object for a stream in the profile. This method references the stream by stream number instead of stream index.
The stream number for which to obtain the interface reference.
Receives a reference to the IMFASFStreamConfig interface of the requested stream. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
This method does not create a copy of the stream configuration object. The reference that is retrieved points to the object within the profile object. You must not make any changes to the stream configuration object using this reference, because doing so can affect the profile object in unexpected ways.
To change the configuration of the stream configuration object in the profile, you must first clone the stream configuration object by calling Clone, make the changes to the clone, and then update the profile by calling SetStream with the modified object.
Adds a stream to the profile or reconfigures an existing stream.
Pointer to the IMFASFStreamConfig interface of a configured ASF stream configuration object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
If the stream number in the ASF stream configuration object is already included in the profile, the information in the new object replaces the old one. If the profile does not contain a stream for the stream number, the ASF stream configuration object is added as a new stream.
-
Removes a stream from the Advanced Systems Format (ASF) profile object.
Stream number of the stream to remove.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
After a stream is removed, the ASF profile object reassigns stream indexes so that the index values are sequential starting from zero. Any previously stored stream index numbers are no longer valid after deleting a stream.
-
Creates an Advanced Systems Format (ASF) stream configuration object.
Pointer to the IMFMediaType interface of a configured media type.
Receives a reference to the IMFASFStreamConfig interface of the new ASF stream configuration object. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | ppIStream is NULL. |
E_OUTOFMEMORY | The stream configuration object could not be created due to insufficient memory. |
The ASF stream configuration object created by this method is not included in the profile. To include the stream, you must first configure the stream configuration object and then call SetStream with the configured object.
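The create-configure-add sequence can be sketched as follows. This is an illustrative outline (the function name AddStreamToProfile is hypothetical); it assumes pMediaType is a fully configured media type for the stream.

```cpp
// Sketch: create a stream configuration from a media type, assign a
// stream number, and add the configured stream to the profile.
HRESULT AddStreamToProfile(IMFASFProfile *pProfile, IMFMediaType *pMediaType,
                           WORD wStreamNumber)
{
    IMFASFStreamConfig *pStream = NULL;
    HRESULT hr = pProfile->CreateStream(pMediaType, &pStream);
    if (SUCCEEDED(hr)) hr = pStream->SetStreamNumber(wStreamNumber); // stream numbers start at 1
    if (SUCCEEDED(hr)) hr = pProfile->SetStream(pStream);            // now part of the profile
    if (pStream) pStream->Release();
    return hr;
}
```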
Retrieves the number of Advanced Systems Format (ASF) mutual exclusion objects that are associated with the profile.
Receives the number of mutual exclusion objects.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Multiple mutual exclusion objects may be required for streams that are mutually exclusive in more than one way. For more information, see
Retrieves an Advanced Systems Format (ASF) mutual exclusion object from the profile.
Index of the mutual exclusion object in the profile.
Receives a reference to the IMFASFMutualExclusion interface of the requested mutual exclusion object. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
This method does not create a copy of the mutual exclusion object. The returned reference refers to the mutual exclusion contained in the profile object. You must not make any changes to the mutual exclusion object using this reference, because doing so can affect the profile object in unexpected ways.
To change the configuration of the mutual exclusion object in the profile, you must first clone the mutual exclusion object by calling Clone, make the changes to the clone, and then add the updated object to the profile by calling AddMutualExclusion.
Adds a configured Advanced Systems Format (ASF) mutual exclusion object to the profile.
Pointer to the IMFASFMutualExclusion interface of a configured ASF mutual exclusion object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
You can create a mutual exclusion object by calling the CreateMutualExclusion method.
Removes an Advanced Systems Format (ASF) mutual exclusion object from the profile.
The index of the mutual exclusion object to remove from the profile.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
When a mutual exclusion object is removed from the profile, the ASF profile object reassigns the mutual exclusion indexes so that they are sequential starting with zero. Any previously stored indexes are no longer valid after calling this method.
-
Creates a new Advanced Systems Format (ASF) mutual exclusion object. Mutual exclusion objects can be added to a profile by calling the AddMutualExclusion method.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
The ASF mutual exclusion object created by this method is not associated with the profile. Call AddMutualExclusion to add the object to the profile.
Reserved.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Reserved.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Reserved.
Returns E_NOTIMPL.
Creates a copy of the Advanced Systems Format profile object.
Receives a reference to the IMFASFProfile interface of the new object. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The cloned object is completely independent of the original.
-
Retrieves the option flags that are set on the ASF splitter.
-
Resets the Advanced Systems Format (ASF) splitter and configures it to parse data from an ASF data section.
Pointer to the IMFASFContentInfo interface of the ContentInfo object that describes the ASF data.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | The pIContentInfo parameter is NULL. |
Sets option flags on the Advanced Systems Format (ASF) splitter.
A bitwise combination of zero or more members of the MFASF_SPLITTER_FLAGS enumeration.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
 | The splitter is not initialized. |
E_INVALIDARG | The dwFlags parameter does not contain a valid flag. |
 | The |
This method can only be called after the splitter is initialized.
-
Retrieves the option flags that are set on the ASF splitter.
Receives the option flags. This value is a bitwise OR of zero or more members of the MFASF_SPLITTER_FLAGS enumeration.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | pdwFlags is NULL. |
Sets the streams to be parsed by the Advanced Systems Format (ASF) splitter.
An array of WORD variables containing the list of stream numbers to select.
The number of valid elements in the stream number array.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | pwStreamNumbers is NULL. |
MF_E_INVALIDSTREAMNUMBER | An invalid stream number was passed in the array. |
Calling this method supersedes any previous stream selections; only the streams specified in the pwStreamNumbers array will be selected.
By default, no streams are selected by the splitter.
You can obtain a list of the currently selected streams by calling the GetSelectedStreams method.
Gets a list of currently selected streams.
The address of an array of WORDs. This array receives the stream numbers of the selected streams. This parameter can be NULL.
On input, points to a variable that contains the number of elements in the pwStreamNumbers array. Set the variable to zero if pwStreamNumbers is NULL.
On output, receives the number of elements that were copied into pwStreamNumbers. Each element is the identifier of a selected stream.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
MF_E_BUFFERTOOSMALL | The pwStreamNumbers array is smaller than the number of selected streams. See Remarks. |
To get the number of selected streams, set pwStreamNumbers to NULL. The method returns MF_E_BUFFERTOOSMALL but sets *pwNumStreams equal to the number of selected streams. Then allocate an array of that size and call the method again, passing the array in the pwStreamNumbers parameter.
The following code shows these steps:

```cpp
HRESULT DisplaySelectedStreams(IMFASFSplitter *pSplitter)
{
    WORD count = 0;
    HRESULT hr = pSplitter->GetSelectedStreams(NULL, &count);
    if (hr == MF_E_BUFFERTOOSMALL)
    {
        WORD *pStreamIds = new (std::nothrow) WORD[count];
        if (pStreamIds)
        {
            hr = pSplitter->GetSelectedStreams(pStreamIds, &count);
            if (SUCCEEDED(hr))
            {
                for (WORD i = 0; i < count; i++)
                {
                    printf("Selected stream ID: %d\n", pStreamIds[i]);
                }
            }
            delete [] pStreamIds;
        }
        else
        {
            hr = E_OUTOFMEMORY;
        }
    }
    return hr;
}
```
Alternatively, you can allocate an array that is equal to the total number of streams and pass that to pwStreamNumbers.
Before calling this method, initialize *pwNumStreams to the number of elements in pwStreamNumbers. If pwStreamNumbers is NULL, set *pwNumStreams to zero.
By default, no streams are selected by the splitter. Select streams by calling the SelectStreams method.
Sends packetized Advanced Systems Format (ASF) data to the ASF splitter for processing.
Pointer to the IMFMediaBuffer interface of a buffer object that contains the data to be parsed.
The offset into the data buffer where the splitter should begin parsing. This value is typically set to 0.
The length, in bytes, of the data to parse. This value is measured from the offset specified by cbBufferOffset. Set to 0 to process to the end of the buffer.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | The pIBuffer parameter is NULL; the offset specified in cbBufferOffset is greater than the length of the buffer; or the total of cbBufferOffset and cbLength is greater than the length of the buffer. |
 | The |
MF_E_NOTACCEPTING | The splitter cannot process more input at this time. |
After using this method to parse data, you must call GetNextSample to retrieve the parsed media samples.
If your ASF data contains variable-sized packets, you must set the
If the method returns MF_E_NOTACCEPTING, call GetNextSample to get the output samples, or call Flush to clear the splitter before delivering more input data.
The splitter might hold a reference count on the input buffer. Therefore, do not write over the valid data in the buffer after calling this method.
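The parse-then-drain pattern can be sketched as follows. This is an illustrative outline (the function name ParseBuffer is hypothetical); it assumes the splitter has been initialized and streams have been selected with SelectStreams.

```cpp
// Sketch: push one buffer of ASF data into the splitter, then pull
// out every sample that becomes available before supplying more input.
HRESULT ParseBuffer(IMFASFSplitter *pSplitter, IMFMediaBuffer *pBuffer,
                    DWORD cbLength)
{
    HRESULT hr = pSplitter->ParseData(pBuffer, 0, cbLength);
    while (SUCCEEDED(hr))
    {
        DWORD dwStatus = 0;
        WORD  wStream  = 0;
        IMFSample *pSample = NULL;
        hr = pSplitter->GetNextSample(&dwStatus, &wStream, &pSample);
        if (FAILED(hr) || pSample == NULL)
        {
            break; // no sample is ready
        }
        // ... deliver pSample for stream wStream here ...
        pSample->Release();
        if ((dwStatus & ASF_STATUSFLAGS_INCOMPLETE) == 0)
        {
            break; // queue drained; provide more input with ParseData
        }
    }
    return hr;
}
```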
-
Retrieves a sample from the Advanced Systems Format (ASF) splitter after the data has been parsed.
Receives one of the following values.
Value | Meaning |
---|---|
ASF_STATUSFLAGS_INCOMPLETE | More samples are ready to be retrieved. Call GetNextSample in a loop until the pdwStatusFlags parameter receives the value zero. |
0 | No additional samples are ready. Call ParseData to give more input data to the splitter. |

If the method returns a sample in the ppISample parameter, this parameter receives the number of the stream to which the sample belongs.
Receives a reference to the IMFSample interface of the parsed sample. The caller must release the interface. If no samples are ready, this parameter receives the value NULL.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
 | The ASF data in the buffer is invalid. |
MF_E_ASF_MISSINGDATA | There is a gap in the ASF data. |
Before calling this method, call ParseData to give input data to the splitter.
The ASF splitter skips samples for unselected streams. To select streams, call SelectStreams.
Resets the Advanced Systems Format (ASF) splitter and releases all pending samples.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Any samples waiting to be retrieved when Flush is called are lost.
-
Retrieves the send time of the last sample received.
Receives the send time of the last sample received.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
 | pdwLastSendTime is NULL. |
Retrieves information about an existing payload extension.
-
Retrieves the stream number of the stream.
-
Retrieves the media type of the stream.
-To reduce unnecessary copying, the method returns a reference to the media type that is stored internally by the object. Do not modify the returned media type, as the results are not defined.
-Gets the major media type of the stream.
Receives the major media type for the stream. For a list of possible values, see Major Media Types.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Retrieves the stream number of the stream.
The method returns the stream number.
Assigns a stream number to the stream.
The number to assign to the stream.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Stream numbers start from 1 and do not need to be sequential.
-
Retrieves the media type of the stream.
Receives a reference to the IMFMediaType interface. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
To reduce unnecessary copying, the method returns a reference to the media type that is stored internally by the object. Do not modify the returned media type, as the results are not defined.
-
Sets the media type for the Advanced Systems Format (ASF) stream configuration object.
Pointer to the IMFMediaType interface of a configured media type.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Some validation of the media type is performed by this method. However, a media type can be successfully set, but cause an error when the stream is added to the profile.
-
Retrieves the number of payload extensions that are configured for the stream.
Receives the number of payload extensions.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Retrieves information about an existing payload extension.
The payload extension index. Valid indexes range from 0 to one less than the number of extensions obtained by calling GetPayloadExtensionCount.
Receives a GUID that identifies the payload extension.
Receives the number of bytes added to each sample for the extension.
Pointer to a buffer that receives information about this extension system. This information is the same for all samples and is stored in the content header (not in each sample). This parameter can be NULL.
On input, specifies the size of the buffer pointed to by pbExtensionSystemInfo. On output, receives the required size of the pbExtensionSystemInfo buffer in bytes.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
 | The buffer specified in pbExtensionSystemInfo is too small. |
 | The wPayloadExtensionNumber parameter is out of range. |
Configures a payload extension for the stream.
Pointer to a GUID that identifies the payload extension.
Number of bytes added to each sample for the extension.
A reference to a buffer that contains information about this extension system. This information is the same for all samples and is stored in the content header (not with each sample). This parameter can be NULL.
Amount of data, in bytes, that describes this extension system. If this value is 0, then pbExtensionSystemInfo can be NULL.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Removes all payload extensions that are configured for the stream.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
None.
-
Creates a copy of the Advanced Systems Format (ASF) stream configuration object.
Receives a reference to the IMFASFStreamConfig interface of the new object. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
The cloned object is completely independent of the original.
Note: This interface is not implemented in this version of Media Foundation.
Adds a stream to the stream priority list.
-The stream priority list is built by appending entries to the list with each call to AddStream. The list is evaluated in descending order of importance. The most important stream should be added first, and the least important should be added last.
Note: This interface is not implemented in this version of Media Foundation.
Retrieves the number of entries in the stream priority list.
Receives the number of streams in the stream priority list.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Note: This interface is not implemented in this version of Media Foundation.
Retrieves the stream number of a stream in the stream priority list.
Zero-based index of the entry to retrieve from the stream priority list. To get the number of entries in the priority list, call GetStreamCount.
Receives the stream number of the stream priority entry.
Receives a Boolean value. If TRUE, the stream is mandatory.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Note: This interface is not implemented in this version of Media Foundation.
Adds a stream to the stream priority list.
Stream number of the stream to add.
If TRUE, the stream is mandatory.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
 | Invalid stream number. |
The stream priority list is built by appending entries to the list with each call to AddStream. The list is evaluated in descending order of importance. The most important stream should be added first, and the least important should be added last.
Note: This interface is not implemented in this version of Media Foundation.
Removes a stream from the stream priority list.
Index of the entry in the stream priority list to remove. Values range from zero to one less than the stream count retrieved by calling GetStreamCount.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
When a stream is removed from the stream priority list, the index values of all streams that follow it in the list are decremented.
Note: This interface is not implemented in this version of Media Foundation.
Creates a copy of the ASF stream prioritization object.
Receives a reference to the IMFASFStreamPrioritization interface of the new object. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
The new object is completely independent of the original.
-
Retrieves the number of bandwidth steps that exist for the content. This method is used for multiple bit rate (MBR) content.
Bandwidth steps are bandwidth levels used for multiple bit rate (MBR) content. If you stream MBR content, you can choose the bandwidth step that matches the network conditions to avoid interruptions during playback.
-
Sets options for the stream selector.
-
Retrieves the number of streams that are in the Advanced Systems Format (ASF) content.
Receives the number of streams in the content.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Retrieves the number of outputs for the Advanced Systems Format (ASF) content.
Receives the number of outputs.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Outputs are streams in the ASF data section that will be parsed.
-
Retrieves the number of streams associated with an output.
The output number for which to retrieve the stream count.
Receives the number of streams associated with the output.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
 | Invalid output number. |
An output is a stream in an ASF data section that will be parsed. If mutual exclusion is used, mutually exclusive streams share the same output.
-
Retrieves the stream numbers for all of the streams that are associated with an output.
The output number for which to retrieve stream numbers.
Address of an array that receives the stream numbers associated with the output. The caller allocates the array. The array size must be at least as large as the value returned by the GetOutputStreamCount method.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
 | Invalid output number. |
An output is a stream in an ASF data section that will be parsed. If mutual exclusion is used, mutually exclusive streams share the same output.
-
Retrieves the output number associated with a stream.
The stream number for which to retrieve an output number.
Receives the output number.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
 | Invalid stream number. |
Outputs are streams in the ASF data section that will be parsed.
-
Retrieves the manual output override selection that is set for a stream.
Stream number for which to retrieve the output override selection.
Receives the output override selection. The value is a member of the ASF_SELECTION_STATUS enumeration.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Sets the selection status of an output, overriding other selection criteria.
Output number for which to set selection.
Member of the ASF_SELECTION_STATUS enumeration specifying the selection to set for the output.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Retrieves the number of mutual exclusion objects associated with an output.
Output number for which to retrieve the count of mutually exclusive relationships.
Receives the number of mutual exclusions.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Retrieves a mutual exclusion object for an output.
Output number for which to retrieve a mutual exclusion object.
Mutual exclusion number. This is an index of mutually exclusive relationships associated with the output. Set to a number between 0 and one less than the number of mutual exclusion objects retrieved by calling GetOutputMutexCount.
Receives a reference to the mutual exclusion object's IMFASFMutualExclusion interface. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Outputs are streams in the ASF data section that will be parsed.
-
Selects a mutual exclusion record to use for a mutual exclusion object associated with an output.
The output number for which to set a stream.
Index of the mutual exclusion for which to make a selection.
Record of the specified mutual exclusion to select.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
An output is a stream in an Advanced Systems Format (ASF) data section that will be parsed. If mutual exclusion is used, mutually exclusive streams share the same output.
An ASF file can contain multiple mutually exclusive relationships, such as a file with both language based and bit-rate based mutual exclusion. If an output is involved in multiple mutually exclusive relationships, a record from each must be selected.
-
Retrieves the number of bandwidth steps that exist for the content. This method is used for multiple bit rate (MBR) content.
Receives the number of bandwidth steps.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Bandwidth steps are bandwidth levels used for multiple bit rate (MBR) content. If you stream MBR content, you can choose the bandwidth step that matches the network conditions to avoid interruptions during playback.
-
Retrieves the stream numbers that apply to a bandwidth step. This method is used for multiple bit rate (MBR) content.
-Bandwidth step number for which to retrieve information. Set this value to a number between 0 and one less than the number of bandwidth steps returned by
Receives the bit rate associated with the bandwidth step.
Address of an array that receives the stream numbers. The caller allocates the array. The array size must be at least as large as the value returned by the
Address of an array that receives the selection status of each stream, as an
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Bandwidth steps are bandwidth levels used for MBR content. If you stream MBR content, you can choose the bandwidth step that matches the network conditions to avoid interruptions during playback.
-
Retrieves the index of a bandwidth step that is appropriate for a specified bit rate. This method is used for multiple bit rate (MBR) content.
-The bit rate to find a bandwidth step for.
Receives the step number. Use this number to retrieve information about the step by calling
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
In a streaming multiple bit rate (MBR) scenario, call this method with the current data rate of the network connection to determine the correct step to use. You can also call this method periodically throughout streaming to ensure that the best step is used.
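As an illustration of the selection logic described above, the sketch below picks a bandwidth step for a measured network rate. It is not part of the API: the function name, the plain array of step bit rates, and the assumption that steps are sorted in ascending bit-rate order are all ours.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative policy only: choose the step with the highest bit rate that
// does not exceed the measured network rate; fall back to the first
// (lowest) step when every step is too fast. Assumes stepBitrates is
// sorted in ascending order.
std::size_t PickBandwidthStep(const std::vector<uint32_t>& stepBitrates,
                              uint32_t networkRate) {
    std::size_t best = 0;
    for (std::size_t i = 0; i < stepBitrates.size(); ++i) {
        if (stepBitrates[i] <= networkRate) {
            best = i;  // ascending order: later qualifying steps are better
        }
    }
    return best;
}
```

Re-running a selection like this periodically with the current measured rate mirrors the recommendation above to re-evaluate the step throughout streaming.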
-
Sets options for the stream selector.
-Bitwise OR of zero or more members of the MFASF_STREAMSELECTOR_FLAGS enumeration specifying the options to use.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
[
Represents a description of an audio format.
-Windows Server 2008 and Windows Vista: If the major type of a media type is
To convert an audio media type into a
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[GetAudioFormat is no longer available for use as of Windows 7. Instead, use the media type attributes to get the properties of the audio format.]
Returns a reference to a
If you need to convert the media type into a
There are no guarantees about how long the returned reference is valid.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[GetAudioFormat is no longer available for use as of Windows 7. Instead, use the media type attributes to get the properties of the audio format.]
Returns a reference to a
This method returns a reference to a
If you need to convert the media type into a
There are no guarantees about how long the returned reference is valid.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Configures the audio session that is associated with the streaming audio renderer (SAR). Use this interface to change how the audio session appears in the Windows volume control.
The SAR exposes this interface as a service. To get a reference to the interface, call
Retrieves the group of sessions to which this audio session belongs.
-If two or more audio sessions share the same group, the Windows volume control displays one slider control for the entire group. Otherwise, it displays a slider for each session. For more information, see IAudioSessionControl::SetGroupingParam in the core audio API documentation.
-
Assigns the audio session to a group of sessions.
-A
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
If two or more audio sessions share the same group, the Windows volume control displays one slider control for the entire group. Otherwise, it displays a slider for each session. For more information, see IAudioSessionControl::SetGroupingParam in the core audio API documentation.
-
Retrieves the group of sessions to which this audio session belongs.
-Receives a
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
If two or more audio sessions share the same group, the Windows volume control displays one slider control for the entire group. Otherwise, it displays a slider for each session. For more information, see IAudioSessionControl::SetGroupingParam in the core audio API documentation.
-
Sets the display name of the audio session. The Windows volume control displays this name.
-A null-terminated wide-character string that contains the display name.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
If the application does not set a display name, Windows creates one.
-
Retrieves the display name of the audio session. The Windows volume control displays this name.
-Receives a reference to the display name string. The caller must free the memory allocated for the string by calling CoTaskMemFree.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
If the application does not set a display name, Windows creates one.
-Sets the icon resource for the audio session. The Windows volume control displays this icon.
-A wide-character string that specifies the icon. See Remarks.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The icon path has the format "path,index" or "path,-id", where path is the fully qualified path to a DLL, executable file, or icon file; index is the zero-based index of the icon within the file; and id is a resource identifier. Note that resource identifiers are preceded by a minus sign (-) to distinguish them from indexes. The path can contain environment variables, such as "%windir%". For more information, see IAudioSessionControl::SetIconPath in the Windows SDK.
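The "path,index" / "path,-id" convention described above can be parsed mechanically. The helper below is a hypothetical sketch: the type and function names are ours, and malformed strings without a comma are not handled.

```cpp
#include <cstdlib>
#include <string>

// Parsed form of an icon resource string such as "shell32.dll,3" or
// "app.exe,-101". A negative number after the comma is a resource
// identifier; a non-negative number is a zero-based icon index.
struct IconRef {
    std::string path;   // DLL, executable, or icon file (may contain %vars%)
    int value;          // icon index, or resource id if isResourceId is true
    bool isResourceId;
};

IconRef ParseIconPath(const std::string& s) {
    IconRef r{};
    std::size_t comma = s.rfind(',');     // assumes a comma is present
    r.path = s.substr(0, comma);
    int n = std::atoi(s.c_str() + comma + 1);
    r.isResourceId = (n < 0);
    r.value = r.isResourceId ? -n : n;    // drop the distinguishing minus sign
    return r;
}
```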
-
Retrieves the icon resource for the audio session. The Windows volume control displays this icon.
-Receives a reference to a wide-character string that specifies a shell resource. The format of the string is described in the topic
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
If the application did not set an icon path, the method returns an empty string ("").
For more information, see IAudioSessionControl::GetIconPath in the core audio API documentation.
-Controls the volume levels of individual audio channels.
The streaming audio renderer (SAR) exposes this interface as a service. To get a reference to the interface, call
If your application does not require channel-level volume control, you can use the
Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation). For each channel, the attenuation level is the product of:
For example, if the master volume is 0.8 and the channel volume is 0.5, the attenuation for that channel is 0.8 × 0.5 = 0.4. Volume levels can exceed 1.0 (positive gain), but the audio engine clips any audio samples that exceed zero decibels.
Use the following formula to convert the volume level to the decibel (dB) scale:
Attenuation (dB) = 20 * log10(Level)
For example, a volume level of 0.50 represents 6.02 dB of attenuation.
-
Retrieves the number of channels in the audio stream.
-
Retrieves the number of channels in the audio stream.
-Receives the number of channels in the audio stream.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Sets the volume level for a specified channel in the audio stream.
-Zero-based index of the audio channel. To get the number of channels, call
Volume level for the channel.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the volume level for a specified channel in the audio stream.
-Zero-based index of the audio channel. To get the number of channels, call
Receives the volume level for the channel.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Sets the individual volume levels for all of the channels in the audio stream.
-Number of elements in the pfVolumes array. The value must equal the number of channels. To get the number of channels, call
Address of an array of size dwCount, allocated by the caller. The array specifies the volume levels for all of the channels. Before calling the method, set each element of the array to the desired volume level for the channel.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the volume levels for all of the channels in the audio stream.
-Number of elements in the pfVolumes array. The value must equal the number of channels. To get the number of channels, call
Address of an array of size dwCount, allocated by the caller. The method fills the array with the volume level for each channel in the stream.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Represents a buffer that contains a two-dimensional surface, such as a video frame.
-To get a reference to this interface, call QueryInterface on the media buffer.
To use a 2-D buffer, it is important to know the stride, which is the number of bytes needed to go from one row of pixels to the next. The stride may be larger than the image width, because the surface may contain padding bytes after each row of pixels. Stride can also be negative, if the pixels are oriented bottom-up in memory. For more information, see Image Stride.
Every video format defines a contiguous or packed representation. This representation is compatible with the standard layout of a DirectX surface in system memory, with no additional padding. For RGB video, the contiguous representation has a pitch equal to the image width in bytes, rounded up to the nearest DWORD boundary. For YUV video, the layout of the contiguous representation depends on the YUV format. For planar YUV formats, the Y plane might have a different pitch than the U and V planes.
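For the RGB case, "width in bytes, rounded up to the nearest DWORD boundary" is simple arithmetic. A sketch (the function name is ours):

```cpp
#include <cstdint>

// Contiguous pitch for RGB video: the image width in bytes, rounded up to
// the nearest DWORD (4-byte) boundary.
uint32_t ContiguousPitch(uint32_t widthBytes) {
    return (widthBytes + 3u) & ~3u;
}
```

For example, a 101-pixel-wide RGB24 image is 303 bytes per row, giving a contiguous pitch of 304.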
If a media buffer supports the
Call the Lock2D method to access the 2-D buffer in its native format. The native format might not be contiguous. The buffer's
For uncompressed images, the amount of valid data in the buffer is determined by the width, height, and pixel layout of the image. For this reason, if you call Lock2D to access the buffer, do not rely on the values returned by
Queries whether the buffer is contiguous in its native format.
-For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Retrieves the number of bytes needed to store the contents of the buffer in contiguous format.
-For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Gives the caller access to the memory in the buffer.
-Receives a reference to the first byte of the top row of pixels in the image. The top row is defined as the top row when the image is presented to the viewer, and might not be the first row in memory.
Receives the surface stride, in bytes. The stride might be negative, indicating that the image is oriented from the bottom up in memory.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| Cannot lock the Direct3D surface. |
| The buffer cannot be locked at this time. |
If p is a reference to the first byte in a row of pixels, p + (*plPitch) points to the first byte in the next row of pixels. A buffer might contain padding after each row of pixels, so the stride might be wider than the width of the image in bytes. Do not access the memory that is reserved for padding bytes, because it might not be read-accessible or write-accessible. For more information, see Image Stride.
The reference returned in pbScanline0 remains valid as long as the caller holds the lock. When you are done accessing the memory, call
The values returned by the
The
When the underlying buffer is a Direct3D surface, the method fails if the surface is not lockable.
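To make the stride arithmetic concrete, the sketch below walks an image row by row using a scan-line pointer and a signed pitch of the kind Lock2D returns. The helper is ours and does not call the interface itself.

```cpp
#include <cstdint>
#include <vector>

// Collect the first byte of each row, starting at the top row's pointer
// and advancing by the pitch. A negative pitch means the image is stored
// bottom-up, so each step moves to an earlier address in memory. Padding
// bytes beyond the image width are never touched.
std::vector<uint8_t> FirstBytePerRow(const uint8_t* scanline0, long pitch,
                                     unsigned height) {
    std::vector<uint8_t> out;
    const uint8_t* p = scanline0;
    for (unsigned y = 0; y < height; ++y) {
        out.push_back(*p);  // p points to row y in top-to-bottom display order
        p += pitch;
    }
    return out;
}
```

For a bottom-up image, scanline0 points to the last row in memory and the pitch is negative; the rows still come back in top-to-bottom display order.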
-
Unlocks a buffer that was previously locked. Call this method once for each call to
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a reference to the buffer memory and the surface stride.
-Receives a reference to the first byte of the top row of pixels in the image.
Receives the stride, in bytes. For more information, see Image Stride.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| You must lock the buffer before calling this method. |
Before calling this method, you must lock the buffer by calling
Queries whether the buffer is contiguous in its native format.
-Receives a Boolean value. The value is TRUE if the buffer is contiguous, and
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Retrieves the number of bytes needed to store the contents of the buffer in contiguous format.
-Receives the number of bytes needed to store the contents of the buffer in contiguous format.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Copies this buffer into the caller's buffer, converting the data to contiguous format.
-Pointer to the destination buffer where the data will be copied. The caller allocates the buffer.
Size of the destination buffer, in bytes. To get the required size, call
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| Invalid size specified in pbDestBuffer. |
If the original buffer is not contiguous, this method converts the contents into contiguous format during the copy. For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Copies data to this buffer from a buffer that has a contiguous format.
-Pointer to the source buffer. The caller allocates the buffer.
Size of the source buffer, in bytes. To get the maximum size of the buffer, call
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
This method copies the contents of the source buffer into the buffer that is managed by this
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in the
Represents a buffer that contains a two-dimensional surface, such as a video frame.
-This interface extends the
Gives the caller access to the memory in the buffer.
-A member of the
Receives a reference to the first byte of the top row of pixels in the image. The top row is defined as the top row when the image is presented to the viewer, and might not be the first row in memory.
Receives the surface stride, in bytes. The stride might be negative, indicating that the image is oriented from the bottom up in memory.
Receives a reference to the start of the accessible buffer in memory.
Receives the length of the buffer, in bytes.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. The buffer might already be locked with an incompatible locking flag. See Remarks. |
| There is insufficient memory to complete the operation. |
When you are done accessing the memory, call
This method is equivalent to the
The ppbBufferStart and pcbBufferLength parameters receive the bounds of the buffer memory. Use these values to guard against buffer overruns. Use the values of ppbScanline0 and plPitch to access the image data. If the image is bottom-up in memory, ppbScanline0 will point to the last scan line in memory and plPitch will be negative. For more information, see Image Stride.
The lockFlags parameter specifies whether the buffer is locked for read-only access, write-only access, or read/write access.
When possible, use a read-only or write-only lock, and avoid locking the buffer for read/write access. If the buffer represents a DirectX Graphics Infrastructure (DXGI) surface, a read/write lock can cause an extra copy between CPU memory and GPU memory.
-Copies the buffer to another 2D buffer object.
-A reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The destination buffer must be at least as large as the source buffer.
-Enables
Indicates that a
Indicates that a
Controls how a byte stream buffers data from a network.
To get a reference to this interface, call QueryInterface on the byte stream object.
-If a byte stream implements this interface, a media source can use it to control how the byte stream buffers data. This interface is designed for byte streams that read data from a network.
A byte stream that implements this interface should also implement the
The byte stream must send a matching
After the byte stream sends an
The byte stream should not send any more buffering events after it reaches the end of the file.
If buffering is disabled, the byte stream does not send any buffering events. Internally, however, it might still buffer data while it waits for I/O requests to complete. Therefore,
If the byte stream is buffering data internally and the media source calls EnableBuffering with the value TRUE, the byte stream can send
After the presentation has started, the media source should forward and
Sets the buffering parameters.
-
Sets the buffering parameters.
-Pointer to an
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Enables or disables buffering.
-Specifies whether the byte stream buffers data. If TRUE, buffering is enabled. If
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Before calling this method, call
Stops any buffering that is in progress.
-The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The byte stream successfully stopped buffering. |
| No buffering was in progress. |
If the byte stream is currently buffering data, it stops and sends an
Controls how a network byte stream transfers data to a local cache. Optionally, this interface is exposed by byte streams that read data from a network, for example, through HTTP.
To get a reference to this interface, call QueryInterface on the byte stream object.
-Stops the background transfer of data to the local cache.
-If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The byte stream resumes transferring data to the cache if the application does one of the following:
Controls how a network byte stream transfers data to a local cache. This interface extends the
Byte streams object in Microsoft Media Foundation can optionally implement this interface. To get a reference to this interface, call QueryInterface on the byte stream object.
-Limits the cache size.
-Queries whether background transfer is active.
-Background transfer might stop because the cache limit was reached (see
Gets the ranges of bytes that are currently stored in the cache.
-Receives the number of ranges returned in the ppRanges array.
Receives an array of
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Limits the cache size.
-The maximum number of bytes to store in the cache, or ULONGLONG_MAX for no limit. The default value is no limit.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Queries whether background transfer is active.
-Receives the value TRUE if background transfer is currently active, or
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Background transfer might stop because the cache limit was reached (see
Creates a media source from a byte stream.
-Applications do not use this interface directly. This interface is exposed by byte-stream handlers, which are used by the source resolver. When the byte-stream handler is given a byte stream, it parses the stream and creates a media source. Byte-stream handlers are registered by file name extension or MIME type.
-
Retrieves the maximum number of bytes needed to create the media source or determine that the byte stream handler cannot parse this stream.
-
Begins an asynchronous request to create a media source from a byte stream.
-Pointer to the byte stream's
String that contains the original URL of the byte stream. This parameter can be
Bitwise OR of zero or more flags. See Source Resolver Flags.
Pointer to the
Receives an
Pointer to the
Pointer to the
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| Unable to parse the byte stream. |
The dwFlags parameter must contain the
The byte-stream handler is responsible for parsing the stream and validating the contents. If the stream is not valid or the byte stream handler cannot parse the stream, the handler should return a failure code. The byte stream is not guaranteed to match the type of stream that the byte handler is designed to parse.
If the pwszURL parameter is not
When the operation completes, the byte-stream handler calls the
Completes an asynchronous request to create a media source.
-Pointer to the
Receives a member of the
Receives a reference to the
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| The operation was canceled. See |
| Unable to parse the byte stream. |
Call this method from inside the
Cancels the current request to create a media source.
-Pointer to the
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
You can use this method to cancel a previous call to BeginCreateObject. Because that method is asynchronous, however, it might be completed before the operation can be canceled. Therefore, your callback might still be invoked after you call this method.
-
Retrieves the maximum number of bytes needed to create the media source or determine that the byte stream handler cannot parse this stream.
-Receives the maximum number of bytes that are required.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Creates a proxy to a byte stream. The proxy enables a media source to read from a byte stream in another process.
-Creates a proxy to a byte stream. The proxy enables a media source to read from a byte stream in another process.
-A reference to the
Reserved. Set to
The interface identifier (IID) of the interface being requested.
Receives a reference to the interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Seeks a byte stream by time position.
-A byte stream can implement this interface if it supports time-based seeking. For example, a byte stream that reads data from a server might implement the interface. Typically, a local file-based byte stream would not implement it.
To get a reference to this interface, call QueryInterface on the byte stream object.
-Queries whether the byte stream supports time-based seeking.
-Queries whether the byte stream supports time-based seeking.
-Receives the value TRUE if the byte stream supports time-based seeking, or
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Seeks to a new position in the byte stream.
-The new position, in 100-nanosecond units.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If the byte stream reads from a server, it might cache the seek request until the next read request. Therefore, the byte stream might not send a request to the server immediately.
-Gets the result of a time-based seek.
-Receives the new position after the seek, in 100-nanosecond units.
Receives the stop time, in 100-nanosecond units. If the stop time is unknown, the value is zero.
Receives the total duration of the file, in 100-nanosecond units. If the duration is unknown, the value is -1.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The byte stream does not support time-based seeking, or no data is available. |
This method returns the server response from a previous time-based seek.
Note: This method normally cannot be invoked until some data is read from the byte stream, because the
Extends the
Dynamically sets the output media type of the record sink or preview sink.
-The stream index to change the output media type on.
The new output media type.
The new encoder attributes. This can be null.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| The sink does not support the media type. |
This is an asynchronous call. Listen to the MF_CAPTURE_ENGINE_OUTPUT_MEDIA_TYPE_SET event to be notified when the output media type has been set.
-Controls the capture source object. The capture source manages the audio and video capture devices.
-To get a reference to the capture source, call
Gets the number of device streams.
-Gets the current capture device's
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the current capture device's
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets a reference to the underlying Source Reader object.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid argument. |
| The capture source was not initialized. Possibly there is no capture device on the system. |
Adds an effect to a capture stream.
-The capture stream. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
A reference to one of the following:
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| No compatible media type could be found. |
| The dwSourceStreamIndex parameter is invalid. |
The effect must be implemented as a Media Foundation Transform (MFT). The pUnknown parameter can point to an instance of the MFT, or to an activation object for the MFT. For more information, see Activation Objects.
The effect is applied to the stream before the data reaches the capture sinks.
-Removes an effect from a capture stream.
-The capture stream. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
A reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. Possibly the specified effect could not be found. |
| The dwSourceStreamIndex parameter is invalid. |
This method removes an effect that was previously added using the
Removes all effects from a capture stream.
-The capture stream. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
Gets a format that is supported by one of the capture streams.
-The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
The zero-based index of the media type to retrieve.
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
| The dwMediaTypeIndex parameter is out of range. |
To enumerate all of the available formats on a stream, call this method in a loop while incrementing dwMediaTypeIndex, until the method returns
Some cameras might support a range of frame rates. The minimum and maximum frame rates are stored in the
Sets the output format for a capture stream.
-The capture stream to set. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
A reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
This method sets the native output type on the capture device. The device must support the specified format. To get the list of available formats, call
Gets the current media type for a capture stream.
-Specifies which stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
Gets the number of device streams.
-Receives the number of device streams.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the stream category for the specified source stream index.
-The index of the source stream.
Receives the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the current mirroring state of the video preview stream.
-The zero-based index of the stream.
Receives the value TRUE if mirroring is enabled, or
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Enables or disables mirroring of the video preview stream.
-The zero-based index of the stream.
If TRUE, mirroring is enabled; if
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The device stream does not have mirroring capability. |
| The source is not initialized. |
Gets the actual device stream index translated from a friendly stream name.
-The friendly name. Can be one of the following:
Receives the value of the stream index that corresponds to the friendly name.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Enables the client to notify the Content Decryption Module (CDM) that global resources should be brought into a consistent state prior to suspending.
-Indicates that the suspend process is starting and resources should be brought into a consistent state.
-If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The actual suspend is about to occur and no more calls will be made into the Content Decryption Module (CDM).
-If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Provides timing information from a clock in Microsoft Media Foundation.
Clocks and some media sinks expose this interface through QueryInterface.
-The
Retrieves the characteristics of the clock.
-
Retrieves the clock's continuity key. (Not supported.)
-Continuity keys are currently not supported in Media Foundation. Clocks must return the value zero in the pdwContinuityKey parameter.
-
Retrieves the properties of the clock.
-
Retrieves the characteristics of the clock.
-Receives a bitwise OR of values from the
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the last clock time that was correlated with system time.
-Reserved, must be zero.
Receives the last known clock time, in units of the clock's frequency.
Receives the system time that corresponds to the clock time returned in pllClockTime, in 100-nanosecond units.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| The clock does not have a time source. |
At some fixed interval, a clock correlates its internal clock ticks with the system time. (The system time is the time returned by the high-resolution performance counter.) This method returns:
The clock time is returned in the pllClockTime parameter and is expressed in units of the clock's frequency. If the clock's
The system time is returned in the phnsSystemTime parameter, and is always expressed in 100-nanosecond units.
To find out how often the clock correlates its clock time with the system time, call GetProperties. The correlation interval is given in the qwCorrelationRate member of the
Some clocks support rate changes through the
For the presentation clock, the clock time is the presentation time, and is always relative to the starting time specified in
Retrieves the clock's continuity key. (Not supported.)
-Receives the continuity key.
The method returns an HRESULT. Possible return codes include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Continuity keys are currently not supported in Media Foundation. Clocks must return the value zero in the pdwContinuityKey parameter.
-
Retrieves the current state of the clock.
-Reserved, must be zero.
Receives the clock state, as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves the properties of the clock.
-Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Creates a media source or a byte stream from a URL.
-Applications do not use this interface. This interface is exposed by scheme handlers, which are used by the source resolver. A scheme handler is designed to parse one type of URL scheme. When the scheme handler is given a URL, it parses the resource that is located at that URL and creates either a media source or a byte stream.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called by the media pipeline to provide the app with an instance of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The ICollection interface is the base interface for classes in the System.Collections namespace.
The ICollection interface extends IEnumerable; IDictionary and IList are more specialized interfaces that extend ICollection. An IDictionary implementation is a collection of key/value pairs, like the Hashtable class. An IList implementation is a collection of values and its members can be accessed by index, like the ArrayList class.
Some collections that limit access to their elements, such as the Queue class and the Stack class, directly implement the ICollection interface.
If neither the IDictionary interface nor the IList interface meets the requirements of the collection, derive the new collection class from the ICollection interface instead, for more flexibility.
For the generic version of this interface, see System.Collections.Generic.ICollection.
Windows 98, Windows Server 2000 SP4, Windows CE, Windows Millennium Edition, Windows Mobile for Pocket PC, Windows Mobile for Smartphone, Windows Server 2003, Windows XP Media Center Edition, Windows XP Professional x64 Edition, Windows XP SP2, Windows XP Starter Edition
The Microsoft .NET Framework 3.0 is supported on Windows Vista, Microsoft Windows XP SP2, and Windows Server 2003 SP1. .NET Framework: supported in 3.0, 2.0, 1.1, 1.0. .NET Compact Framework: supported in 2.0, 1.0. XNA Framework: supported in 1.0. -
Retrieves the number of objects in the collection.
-
Retrieves the number of objects in the collection.
-Receives the number of objects in the collection.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves an object in the collection.
-Zero-based index of the object to retrieve. Objects are indexed in the order in which they were added to the collection.
Receives a reference to the object's
This method does not remove the object from the collection. To remove an object, call
Adds an object to the collection.
-Pointer to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If pUnkElement is
Removes an object from the collection.
-Zero-based index of the object to remove. Objects are indexed in the order in which they were added to the collection.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Adds an object at the specified index in the collection.
-The zero-based index where the object will be added to the collection.
The object to insert.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Removes all items from the collection.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Allows a decryptor to manage hardware keys and decrypt hardware samples.
-Allows the display driver to return IHV-specific information used when initializing a new hardware key.
-The number of bytes in the buffer that InputPrivateData specifies.
The contents of this parameter are defined by the implementation of the protection system that runs in the security processor. The contents may contain data about license or stream properties.
The return data is also defined by the implementation of the protection system that runs in the security processor. The contents may contain data associated with the underlying hardware key.
If this method succeeds, it returns
Implements one step that must be performed for the user to access media content. For example, the steps might be individualization followed by license acquisition. Each of these steps would be encapsulated by a content enabler object that exposes the
Retrieves the type of operation that this content enabler performs.
-The following GUIDs are defined for the pType parameter.
Value | Description |
---|---|
MFENABLETYPE_MF_RebootRequired | The user must reboot his or her computer. |
MFENABLETYPE_MF_UpdateRevocationInformation | Update revocation information. |
MFENABLETYPE_MF_UpdateUntrustedComponent | Update untrusted components. |
MFENABLETYPE_WMDRMV1_LicenseAcquisition | License acquisition for Windows Media Digital Rights Management (DRM) version 1. |
MFENABLETYPE_WMDRMV7_Individualization | Individualization. |
MFENABLETYPE_WMDRMV7_LicenseAcquisition | License acquisition for Windows Media DRM version 7 or later. |
?
-
Queries whether the content enabler can perform all of its actions automatically.
-If this method returns TRUE in the pfAutomatic parameter, call the
If this method returns
Retrieves the type of operation that this content enabler performs.
-Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The following GUIDs are defined for the pType parameter.
Value | Description |
---|---|
MFENABLETYPE_MF_RebootRequired | The user must reboot his or her computer. |
MFENABLETYPE_MF_UpdateRevocationInformation | Update revocation information. |
MFENABLETYPE_MF_UpdateUntrustedComponent | Update untrusted components. |
MFENABLETYPE_WMDRMV1_LicenseAcquisition | License acquisition for Windows Media Digital Rights Management (DRM) version 1. |
MFENABLETYPE_WMDRMV7_Individualization | Individualization. |
MFENABLETYPE_WMDRMV7_LicenseAcquisition | License acquisition for Windows Media DRM version 7 or later. |
?
-
Retrieves a URL for performing a manual content enabling action.
-Receives a reference to a buffer that contains the URL. The caller must release the memory for the buffer by calling CoTaskMemFree.
Receives the number of characters returned in ppwszURL, including the terminating NULL character.
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No URL is available. |
?
If the enabling action can be performed by navigating to a URL, this method returns the URL. If no such URL exists, the method returns a failure code.
The purpose of the URL depends on the content enabler type, which is obtained by calling
Enable type | Purpose of URL |
---|---|
Individualization | Not applicable. |
License acquisition | URL to obtain the license. Call |
Revocation | URL to a webpage where the user can download and install an updated component. |
?
-
Retrieves the data for a manual content enabling action.
-Receives a reference to a buffer that contains the data. The caller must free the buffer by calling CoTaskMemFree.
Receives the size of the ppbData buffer.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No data is available. |
?
The purpose of the data depends on the content enabler type, which is obtained by calling
Enable type | Purpose of data |
---|---|
Individualization | Not applicable. |
License acquisition | HTTP POST data. |
Revocation | |
?
-
Queries whether the content enabler can perform all of its actions automatically.
-Receives a Boolean value. If TRUE, the content enabler can perform the enabling action automatically.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If this method returns TRUE in the pfAutomatic parameter, call the
If this method returns
Performs a content enabling action without any user interaction.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method is asynchronous. When the operation is complete, the content enabler sends an
To find out whether the content enabler supports this method, call
Requests notification when the enabling action is completed.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The method succeeded and no action was required. |
?
If you use a manual enabling action, call this method to be notified when the operation completes. If this method returns
You do not have to call MonitorEnable when you use automatic enabling by calling
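The choice between the automatic and manual enabling paths described above can be sketched as follows. The struct and function are illustrative stand-ins only; in the real interface the flag comes back through the IsAutomaticSupported query and the URL through GetEnableURL.

```cpp
#include <string>

// Mock content enabler state; both fields are illustrative stand-ins.
struct MockEnabler {
    bool automatic;   // what IsAutomaticSupported would report
    std::string url;  // what GetEnableURL would return, if anything
};

// Returns the path an application would take for this enabler.
std::string ChooseEnablePath(const MockEnabler& e) {
    if (e.automatic) {
        // Automatic path: AutomaticEnable performs the action, and
        // completion is reported through an event, so MonitorEnable
        // is not needed.
        return "AutomaticEnable";
    }
    // Manual path: navigate to the enabling URL, then call
    // MonitorEnable to be notified when the action completes.
    return "GetEnableURL+MonitorEnable";
}
```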
Cancels a pending content enabling action.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The content enabler sends an
Gets the required number of bytes that need to be prepended to the input and output buffers when you call the security processor through the InvokeFunction method. When you specify this number of bytes, the Media Foundation transform (MFT) decryptor can allocate the total number of bytes and can avoid making copies of the data when the decryptor moves the data to the security processor.
-Calls into the implementation of the protection system in the security processor.
-The identifier of the function that you want to run. This identifier is defined by the implementation of the protection system.
The number of bytes in the buffer that InputBuffer specifies, including private data.
A reference to the data that you want to provide as input.
Pointer to a value that specifies the length in bytes of the data that the function wrote to the buffer that OutputBuffer specifies, including the private data.
Pointer to the buffer where you want the function to write its output.
If this method succeeds, it returns
Gets the required number of bytes that need to be prepended to the input and output buffers when you call the security processor through the InvokeFunction method. When you specify this number of bytes, the Media Foundation transform (MFT) decryptor can allocate the total number of bytes and can avoid making copies of the data when the decryptor moves the data to the security processor.
-If this method succeeds, it returns
Enables playback of protected content by providing the application with a reference to a content enabler object.
Applications that play protected content should implement this interface.
-A content enabler is an object that performs some action that is required to play a piece of protected content. For example, the action might be obtaining a DRM license. Content enablers expose the
To use this interface, do the following:
Implement the interface in your application.
Create an attribute store by calling
Set the
Call
If the content requires a content enabler, the application's BeginEnableContent method is called. Usually this method is called during the
Many content enablers send machine-specific data to the network, which can have privacy implications. One of the purposes of the
Begins an asynchronous request to perform a content enabling action.
This method requests the application to perform a specific step needed to acquire rights to the content, using a content enabler object.
- Pointer to the
Pointer to the
Pointer to the
Reserved. Currently this parameter is always
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Do not block within this callback method. Instead, perform the content enabling action asynchronously on another thread. When the operation is finished, notify the protected media path (PMP) through the pCallback parameter.
If you return a success code from this method, you must call Invoke on the callback. Conversely, if you return an error code from this method, you must not call Invoke. If the operation fails after the method returns a success code, use the status code on the
After the callback is invoked, the PMP will call the application's
This method is not necessarily called every time the application plays protected content. Generally, the method will not be called if the user has a valid, up-to-date license for the content. Internally, the input trust authority (ITA) determines whether BeginEnableContent is called, based on the content provider's DRM policy. For more information, see Protected Media Path.
-
Ends an asynchronous request to perform a content enabling action. This method is called by the protected media path (PMP) to complete an asynchronous call to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
When the BeginEnableContent method completes asynchronously, the application notifies the PMP by invoking the asynchronous callback. The PMP calls EndEnableContent on the application to get the result code. This method is called on the application's thread from inside the callback method. Therefore, it must not block the thread that invoked the callback.
The application must return the success or failure code of the asynchronous processing that followed the call to BeginEnableContent.
-Enables the presenter for the enhanced video renderer (EVR) to request a specific frame from the video mixer.
The sample objects created by the
Called by the mixer to get the time and duration of the sample requested by the presenter.
-Receives the desired sample time that should be mixed.
Receives the sample duration that should be mixed.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time stamp was set for this sample. See |
?
Called by the presenter to set the time and duration of the sample that it requests from the mixer.
-The time of the requested sample.
The duration of the requested sample.
This value should be set prior to passing the buffer to the mixer for a Mix operation. The mixer sets the actual start and duration times on the sample before sending it back.
-
Clears the time stamps previously set by a call to
After this method is called, the
This method also clears the time stamp and duration and removes all attributes from the sample.
--
The SetInputStreamState method sets the Device MFT input stream state and media type.
-Stream ID of the input stream where the state and media type need to be changed.
Preferred media type for the input stream is passed in through this parameter. Device MFT should change the media type only if the incoming media type is different from the current media type.
Specifies the DeviceStreamState to which the input stream should transition.
When
The method returns an
Return code | Description |
---|---|
| Initialization succeeded |
| Device MFT could not support the request at this time. |
| An invalid stream ID was passed. |
| The requested stream transition is not possible. |
?
This interface function helps to transition the input stream to a specified state with a specified media type set on the input stream. This will be used by the device transform manager (DTM) when the Device MFT requests a specific input stream's state and media type to be changed. Device MFT would need to request such a change when one of the Device MFT's outputs changes.
As an example, consider a Device MFT that has two input streams and three output streams. Let Output 1 and Output 2 source from Input 1 and stream at 720p. Now, if Output 2's media type changes to 1080p, Device MFT has to change Input 1's media type to 1080p. To achieve this, Device MFT should request DTM to call this method using the
The SetOutputStreamState method sets the Device MFT output stream state and media type.
-Stream ID of the output stream where the state and media type need to be changed.
Preferred media type for the output stream is passed in through this parameter. Device MFT should change the media type only if the incoming media type is different from the current media type.
Specifies the DeviceStreamState to which the output stream should transition.
Must be zero.
The method returns an
Return code | Description |
---|---|
| Transitioning the stream state succeeded. |
| Device MFT could not support the request at this time. |
| An invalid stream ID was passed. |
| The requested stream transition is not possible. |
?
This interface method helps to transition the output stream to a specified state with a specified media type set on the output stream. This will be used by the DTM when the Device Source requests a specific output stream's state and media type to be changed. Device MFT should change the specified output stream's media type and state to the requested media type.
If the incoming media type and stream state are the same as the current media type and stream state, the method returns
If the incoming media type and current media type of the stream are the same, Device MFT must change the stream's state to the requested value and return the appropriate
When a change in the output stream's media type requires a corresponding change in the input, Device MFT must post the
As an example, consider a Device MFT that has two input streams and three output streams. Let Output 1 and Output 2 source from Input 1 and stream at 720p. Now, let us say Output 2's media type changes to 1080p. To satisfy this request, Device MFT must change the Input 1 media type to 1080p, by posting
Initializes the Digital Living Network Alliance (DLNA) media sink.
The DLNA media sink exposes this interface. To get a reference to this interface, call CoCreateInstance. The CLSID is CLSID_MPEG2DLNASink.
-Initializes the Digital Living Network Alliance (DLNA) media sink.
-Pointer to a byte stream. The DLNA media sink writes data to this byte stream. The byte stream must be writable.
If TRUE, the DLNA media sink accepts PAL video formats. Otherwise, it accepts NTSC video formats.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The method was already called. |
| The media sink's |
?
Configures Windows Media Digital Rights Management (DRM) for Network Devices on a network sink.
The Advanced Systems Format (ASF) streaming media sink exposes this interface. To get a reference to the
For more information, see Remarks.
-To stream protected content over a network, the ASF streaming media sink provides an output trust authority (OTA) that supports Windows Media DRM for Network Devices and implements the
The application gets a reference to
To stream the content, the application does the following:
To stream DRM-protected content over a network from a server to a client, an application must use the Microsoft Media Foundation Protected Media Path (PMP). The media sink and the application-provided HTTP byte stream exist in mfpmp.exe. Therefore, the byte stream must expose the
When the clock starts for the first time or restarts, the encrypter that is used for encrypting samples is retrieved, and the license response is cached.
Gets the license response for the specified request.
-Pointer to a byte array that contains the license request.
Size, in bytes, of the license request.
Receives a reference to a byte array that contains the license response. The caller must free the array by calling CoTaskMemFree.
Receives the size, in bytes, of the license response.
Receives the key identifier. The caller must release the string by calling SysFreeString.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink was shut down. |
?
Not implemented in this release.
-Receives a reference to a byte array that contains the license response. The caller must free the array by calling CoTaskMemFree.
Receives the size, in bytes, of the license response.
The method returns E_NOTIMPL.
Represents a buffer that contains a Microsoft DirectX Graphics Infrastructure (DXGI) surface.
-To create a DXGI media buffer, first create the DXGI surface. Then call
Gets the index of the subresource that is associated with this media buffer.
-The subresource index is specified when you create the media buffer object. See
For more information about texture subresources, see
Queries the Microsoft DirectX Graphics Infrastructure (DXGI) surface for an interface.
-The interface identifer (IID) of the interface being requested.
Receives a reference to the interface. The caller must release the interface.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The object does not support the specified interface. |
| Invalid request. |
?
You can use this method to get a reference to the
Gets the index of the subresource that is associated with this media buffer.
-Receives the zero-based index of the subresource.
If this method succeeds, it returns
The subresource index is specified when you create the media buffer object. See
For more information about texture subresources, see
Gets an
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The object does not support the specified interface. |
| The specified key was not found. |
?
Stores an arbitrary
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| An item already exists with this key. |
?
To retrieve the reference from the object, call
Provides functionality for getting the
Gets the
Gets the
If this method succeeds, it returns
Enables an application to use a Media Foundation transform (MFT) that has restrictions on its use.
-If you register an MFT that requires unlocking, include the
Unlocks a Media Foundation transform (MFT) so that the application can use it.
-A reference to the
If this method succeeds, it returns
This method authenticates the caller, using a private communication channel between the MFT and the object that implements the
Retrieves the number of input pins on the EVR filter. The EVR filter always has at least one input pin, which corresponds to the reference stream.
-
Retrieves the number of input pins on the EVR filter. The EVR filter always has at least one input pin, which corresponds to the reference stream.
-
Sets the number of input pins on the EVR filter.
-Specifies the total number of input pins on the EVR filter. This value includes the input pin for the reference stream, which is created by default. For example, to mix one substream plus the reference stream, set this parameter to 2.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid number of streams. The minimum is one, and the maximum is 16. |
| This method has already been called, or at least one pin is already connected. |
?
After this method has been called, it cannot be called a second time on the same instance of the EVR filter. Also, the method fails if any input pins are connected.
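The validation rules above (between 1 and 16 streams, settable only once, and only before any pin is connected) can be sketched with a small mock. The error values are stand-ins, not the real DirectShow HRESULT codes.

```cpp
#include <cstdint>

// Mock of the pin-count rules described above. HRESULT values are
// illustrative stand-ins for the real error codes.
using HRESULT = int32_t;
constexpr HRESULT S_OK = 0;
constexpr HRESULT E_INVALIDARG = 1;   // stand-in: invalid number of streams
constexpr HRESULT E_WRONG_STATE = 2;  // stand-in: already set, or pins connected

struct MockEvrConfig {
    uint32_t streams = 1;   // the reference stream exists by default
    bool countSet = false;
    bool anyPinConnected = false;

    HRESULT SetNumberOfStreams(uint32_t n) {
        if (n < 1 || n > 16) return E_INVALIDARG;
        if (countSet || anyPinConnected) return E_WRONG_STATE;
        streams = n;
        countSet = true;  // a second call on the same instance must fail
        return S_OK;
    }
};
```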
-
Retrieves the number of input pins on the EVR filter. The EVR filter always has at least one input pin, which corresponds to the reference stream.
-Receives the number of streams.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Configures the DirectShow Enhanced Video Renderer (EVR) filter. To get a reference to this interface, call QueryInterface on the EVR filter.
-Gets or sets the configuration parameters for the Microsoft DirectShow Enhanced Video Renderer (EVR) filter.
-Sets the configuration parameters for the Microsoft DirectShow Enhanced Video Renderer Filter (EVR).
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
?
Gets the configuration parameters for the Microsoft DirectShow Enhanced Video Renderer (EVR) filter.
-If this method succeeds, it returns
Optionally supported by media sinks to perform required tasks before shutdown. This interface is typically exposed by archive sinks, that is, media sinks that write to a file. It is used to perform tasks such as flushing data to disk or updating a file header.
To get a reference to this interface, call QueryInterface on the media sink.
-If a media sink exposes this interface, the Media Session will call BeginFinalize on the sink before the session closes.
-
Notifies the media sink to asynchronously take any steps it needs to finish its tasks.
-Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Many archive media sinks have steps they need to do at the end of archiving to complete their file operations, such as updating the header (for some formats) or flushing all pending writes to disk. In some cases, this may include expensive operations such as indexing the content. BeginFinalize is an asynchronous way to initiate final tasks.
When the finalize operation is complete, the callback object's
Completes an asynchronous finalize operation.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Call this method after the
Implemented by the Microsoft Media Foundation sink writer object.
-To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Enables a media source in the application process to create objects in the protected media path (PMP) process.
-This interface is used when a media source resides in the application process but the Media Session resides in a PMP process. The media source can use this interface to create objects in the PMP process. For example, to play DRM-protected content, the media source typically must create an input trust authority (ITA) in the PMP process.
To use this interface, the media source implements the
You can also get a reference to this interface by calling
Applications implement this interface in order to provide a custom HTTP or HTTPS download implementation. Use the
Callback interface to notify the application when an asynchronous method completes.
-For more information about asynchronous methods in Microsoft Media Foundation, see Asynchronous Callback Methods.
This interface is also used to perform a work item in a Media Foundation work-queue. For more information, see Work Queues.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
If the work queue is not compatible with the value returned in pdwFlags, the Media Foundation platform returns
Applies to: desktop apps | Metro style apps
Called when an asynchronous operation is completed.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Within your implementation of Invoke, call the corresponding End... method.
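The Begin.../Invoke/End... flow described above can be sketched with plain C++ callables. No real COM types are used; the std::function stands in for the callback object, and the shape of the flow, not the signatures, is the point.

```cpp
#include <functional>
#include <string>
#include <utility>

// Illustrative sketch of the Media Foundation asynchronous pattern:
// Begin... starts the operation and remembers the caller's callback; when
// the operation finishes, the callback runs (the platform would call
// IMFAsyncCallback::Invoke here) and calls the matching End... method to
// pick up the result.
struct AsyncOp {
    std::function<void()> callback;  // stands in for the callback object
    std::string result;
    bool done = false;

    void Begin(std::function<void()> cb) { callback = std::move(cb); }

    void CompleteFromWorker(const std::string& r) {
        result = r;
        done = true;
        callback();  // corresponds to invoking the callback
    }

    // End... is meant to be called only from inside the callback.
    std::string End() { return result; }
};
```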
Provides logging information about the parent object the async callback is associated with.
-Media sources are objects that generate media data in the Media Foundation pipeline. This section describes the media source APIs in detail. Read this section if you are implementing a custom media source, or using a media source outside of the Media Foundation pipeline.
If your application uses the control layer, it needs to use only a limited subset of the media source APIs. For information, see the topic Using Media Sources with the Media Session.
-
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The
The following functions return
A byte stream for a media source can be opened with read access. A byte stream for an archive media sink should be opened with both read and write access. (Read access may be required, because the archive sink might need to read portions of the file as it writes.)
Some implementations of this interface also expose one or more of the following interfaces:
Retrieves the characteristics of the byte stream.
-
Retrieves the length of the stream.
-
Retrieves the current read or write position in the stream.
-The methods that update the current position are Read, BeginRead, Write, BeginWrite, SetCurrentPosition, and Seek.
Queries whether the current position has reached the end of the stream.
-
Reads data from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
This method reads at most cb bytes from the current position in the stream and copies them into the buffer provided by the caller. The number of bytes that were read is returned in the pcbRead parameter. The method does not return an error code on reaching the end of the file, so the application should check the value in pcbRead after the method returns.
This method is synchronous. It blocks until the read operation completes.
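The Read contract described above, including the need to check pcbRead rather than the return code for end of stream, can be sketched with a mock stream backed by a string:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <string>

// Mock byte stream illustrating the Read contract: reads at most cb bytes,
// reports the count through pcbRead, and does NOT return an error at end of
// stream, so the caller must check for a zero count.
struct MockByteStream {
    std::string data;
    size_t pos = 0;

    int Read(uint8_t* pb, uint32_t cb, uint32_t* pcbRead) {
        uint32_t n = static_cast<uint32_t>(
            std::min<size_t>(cb, data.size() - pos));
        std::memcpy(pb, data.data() + pos, n);
        pos += n;
        *pcbRead = n;  // 0 here means the end of the stream was reached
        return 0;      // stand-in for S_OK
    }
};

// Read-to-end loop: keep reading until pcbRead comes back as zero.
std::string ReadAll(MockByteStream& s) {
    std::string out;
    uint8_t buf[4];
    uint32_t got = 0;
    do {
        if (s.Read(buf, sizeof buf, &got) != 0) break;
        out.append(reinterpret_cast<char*>(buf), got);
    } while (got != 0);
    return out;
}
```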
Begins an asynchronous read operation from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been read into the buffer, the callback object's
Do not read from, write to, free, or reallocate the buffer while an asynchronous read is pending.
Completes an asynchronous read operation.
- Pointer to the
Call this method after the
Writes data to the stream.
-Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
If this method succeeds, it returns
This method writes the contents of the pb buffer to the stream, starting at the current stream position. The number of bytes that were written is returned in the pcbWritten parameter.
This method is synchronous. It blocks until the write operation completes.
Begins an asynchronous write operation to the stream.
-Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been written to the stream, the callback object's
Do not reallocate, free, or write to the buffer while an asynchronous write is still pending.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Completes an asynchronous write operation.
-Pointer to the
Call this method when the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Moves the current position in the stream by a specified offset.
- Specifies the origin of the seek as a member of the
Specifies the new position, as a byte offset from the seek origin.
Specifies zero or more flags. The following flags are defined.
Value | Meaning |
---|---|
| All pending I/O requests are canceled after the seek request completes successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
-If this method succeeds, it returns
If the byte stream is read-only, this method has no effect.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
-If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
The GetCurrentOperationMode method retrieves the optimization features in effect.
Zero-based index of an output stream on the DMO.
Pointer to a variable that receives the current features. The returned value is a bitwise combination of zero or more flags from the DMO_VIDEO_OUTPUT_STREAM_FLAGS enumeration.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| |
| Success |
The GetCurrentSampleRequirements method retrieves the optimization features required to process the next sample, given the features already agreed to by the application.
Zero-based index of an output stream on the DMO.
Pointer to a variable that receives the required features. The returned value is a bitwise combination of zero or more flags from the DMO_VIDEO_OUTPUT_STREAM_FLAGS enumeration.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| |
| Success |
After an application calls the
Before processing a sample, the application can call this method. If the DMO does not require a given feature in order to process the next sample, it omits the corresponding flag from the pdwRequestedFeatures parameter. For the next sample only, the application can ignore the feature. The results of this method are valid only for the next call to the
The DMO will return only the flags that were agreed to in the SetOperationMode method. In other words, you cannot dynamically enable new features with this method.
-
The Next method retrieves a specified number of items in the enumeration sequence.
Number of items to retrieve.
Array of size cItemsToFetch that is filled with the CLSIDs of the enumerated DMOs.
Array of size cItemsToFetch that is filled with the friendly names of the enumerated DMOs.
Pointer to a variable that receives the actual number of items retrieved. Can be
Returns an
Return code | Description |
---|---|
| Invalid argument. |
| Insufficient memory. |
| |
| Retrieved fewer items than requested. |
| Retrieved the requested number of items. |
If the method succeeds, the arrays given by the pCLSID and Names parameters are filled with CLSIDs and wide-character strings. The value of *pcItemsFetched specifies the number of items returned in these arrays.
The method returns
The caller must free the memory allocated for each string returned in the Names parameter, using the CoTaskMemFree function.
-
The Reset method resets the enumeration sequence to the beginning.
Returns
The
interface provides methods for manipulating a data buffer. Buffers passed to the
The
interface provides methods for manipulating a Microsoft DirectX Media Object (DMO).
The GetOutputStreamInfo method retrieves information about an output stream; for example, whether the stream is discardable, and whether it uses a fixed sample size. This information never changes.
Zero-based index of an output stream on the DMO.
Pointer to a variable that receives a bitwise combination of zero or more DMO_OUTPUT_STREAM_INFO_FLAGS flags.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| |
| Success |
The GetInputType method retrieves a preferred media type for a specified input stream.
Zero-based index of an input stream on the DMO.
Zero-based index on the set of acceptable media types.
Pointer to a
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Type index is out of range. |
| Insufficient memory. |
| |
| Success. |
Call this method to enumerate an input stream's preferred media types. The DMO assigns each media type an index value in order of preference. The most preferred type has an index of zero. To enumerate all the types, make successive calls while incrementing the type index until the method returns DMO_E_NO_MORE_ITEMS. The DMO is not guaranteed to enumerate every media type that it supports.
The format block in the returned type might be
If the method succeeds, call MoFreeMediaType to free the format block. (This function is also safe to call when the format block is
To set the media type, call the
To test whether a particular media type is acceptable, call SetInputType with the
To test whether the dwTypeIndex parameter is in range, set pmt to
The SetInputType method sets the media type on an input stream, or tests whether a media type is acceptable.
Zero-based index of an input stream on the DMO.
Pointer to a
Bitwise combination of zero or more flags from the DMO_SET_TYPE_FLAGS enumeration.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| Media type was not accepted |
| Media type is not acceptable |
| Media type was set successfully, or is acceptable |
Call this method to test, set, or clear the media type on an input stream:
The media types that are currently set on other streams can affect whether the media type is acceptable.
-
The GetInputCurrentType method retrieves the media type that was set for an input stream, if any.
Zero-based index of an input stream on the DMO.
Pointer to a
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Media type was not set. |
| Insufficient memory. |
| Success. |
The caller must set the media type for the stream before calling this method. To set the media type, call the
If the method succeeds, call MoFreeMediaType to free the format block.
-
The GetInputSizeInfo method retrieves the buffer requirements for a specified input stream.
Zero-based index of an input stream on the DMO.
Pointer to a variable that receives the minimum size of an input buffer for this stream, in bytes.
Pointer to a variable that receives the maximum amount of data that the DMO will hold for lookahead, in bytes. If the DMO does not perform lookahead on the stream, the value is zero.
Pointer to a variable that receives the required buffer alignment, in bytes. If the input stream has no alignment requirement, the value is 1.
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Media type was not set. |
| Success. |
The buffer requirements may depend on the media types of the various streams. Before calling this method, set the media type of each stream by calling the
If the DMO performs lookahead on the input stream, it returns the
A buffer is aligned if the buffer's start address is a multiple of *pcbAlignment. The alignment must be a power of two. Depending on the microprocessor, reads and writes to an aligned buffer might be faster than to an unaligned buffer. Also, some microprocessors do not support unaligned reads and writes.
-
The Flush method flushes all internally buffered data.
Returns
The DMO performs the following actions when this method is called:
Media types, maximum latency, and locked state do not change.
When the method returns, every input stream accepts data. Output streams cannot produce any data until the application calls the
The Discontinuity method signals a discontinuity on the specified input stream.
Zero-based index of an input stream on the DMO.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| The DMO is not accepting input. |
| The input and output types have not been set. |
| Success |
A discontinuity represents a break in the input. A discontinuity might occur because no more data is expected, the format is changing, or there is a gap in the data. After a discontinuity, the DMO does not accept further input on that stream until all pending data has been processed. The application should call the
This method might fail if it is called before the client sets the input and output types on the DMO.
-
The ProcessInput method delivers a buffer to the specified input stream.
Zero-based index of an input stream on the DMO.
Pointer to the buffer's
Bitwise combination of zero or more flags from the DMO_INPUT_DATA_BUFFER_FLAGS enumeration.
Time stamp that specifies the start time of the data in the buffer. If the buffer has a valid time stamp, set the
Reference time specifying the duration of the data in the buffer. If this value is valid, set the
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Data cannot be accepted. |
| No output to process. |
| Success. |
The input buffer specified in the pBuffer parameter is read-only. The DMO will not modify the data in this buffer. All write operations occur on the output buffers, which are given in a separate call to the
If the DMO does not process all the data in the buffer, it keeps a reference count on the buffer. It releases the buffer once it has generated all the output, unless it needs to perform lookahead on the data. (To determine whether a DMO performs lookahead, call the
If this method returns DMO_E_NOTACCEPTING, call ProcessOutput until the input stream can accept more data. To determine whether the stream can accept more data, call the
If the method returns S_FALSE, no output was generated from this input and the application does not need to call ProcessOutput. However, a DMO is not required to return S_FALSE in this situation; it might return
The ProcessOutput method generates output from the current input data.
Bitwise combination of zero or more flags from the DMO_PROCESS_OUTPUT_FLAGS enumeration.
Number of output buffers.
Pointer to an array of
Pointer to a variable that receives a reserved value (zero). The application should ignore this value.
Returns an
Return code | Description |
---|---|
| Failure |
| Invalid argument |
| |
| No output was generated |
| Success |
The pOutputBuffers parameter points to an array of
Each
When the application calls ProcessOutput, the DMO processes as much input data as possible. It writes the output data to the output buffers, starting from the end of the data in each buffer. (To find the end of the data, call the
If the DMO fills an entire output buffer and still has input data to process, the DMO returns the
If the method returns S_FALSE, no output was generated. However, a DMO is not required to return S_FALSE in this situation; it might return
Discarding data:
You can discard data from a stream by setting the
For each stream in which pBuffer is
To check whether a stream is discardable or optional, call the
The Lock method acquires or releases a lock on the DMO. Call this method to keep the DMO serialized when performing multiple operations.
Value that specifies whether to acquire or release the lock. If the value is non-zero, a lock is acquired. If the value is zero, the lock is released.
Returns an
Return code | Description |
---|---|
| Failure |
| Success |
This method prevents other threads from calling methods on the DMO. If another thread calls a method on the DMO, the thread blocks until the lock is released.
If you are using the Active Template Library (ATL) to implement a DMO, the name of the Lock method conflicts with the CComObjectRootEx::Lock method. To work around this problem, define the preprocessor symbol FIX_LOCK_NAME before including the header file Dmo.h:
#define FIX_LOCK_NAME
#include <dmo.h>
This directive causes the preprocessor to rename the
The GetLatency method retrieves the latency introduced by this DMO.
This method returns the average time required to process each buffer. This value usually depends on factors in the run-time environment, such as the processor speed and the CPU load. One possible way to implement this method is for the DMO to keep a running average based on historical data.
-
The Clone method creates a copy of the DMO in its current state.
Address of a reference to receive the new DMO's
Returns
If the method succeeds, the
The GetLatency method retrieves the latency introduced by this DMO.
Pointer to a variable that receives the latency, in 100-nanosecond units.
Returns
This method returns the average time required to process each buffer. This value usually depends on factors in the run-time environment, such as the processor speed and the CPU load. One possible way to implement this method is for the DMO to keep a running average based on historical data.
-Enables other components in the protected media path (PMP) to use the input protection system provided by an input trust authorities (ITA). An ITA is a component that implements an input protection system for media content. ITAs expose the
An ITA translates policy from the content's native format into a common format that is used by other PMP components. It also provides a decrypter, if one is needed to decrypt the stream.
The topology contains one ITA instance for every protected stream in the media source. The ITA is obtained from the media source by calling
Retrieves a decrypter transform.
-Interface identifier (IID) of the interface being requested. Currently this value must be IID_IMFTransform, which requests the
Receives a reference to the interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The decrypter does not support the requested interface. |
| This input trust authority (ITA) does not provide a decrypter. |
The decrypter should be created in a disabled state, where any calls to
An ITA is not required to provide a decrypter. If the source content is not encrypted, the method should return
The ITA must create a new instance of its decrypter for each call to GetDecrypter. Do not return multiple references to the same decrypter. They must be separate instances because the Media Session might place them in two different branches of the topology.
-
Requests permission to perform a specified action on the stream.
-The requested action, specified as a member of the
Receives the value
The method returns an
Return code | Description |
---|---|
| The user has permission to perform this action. |
| The user must individualize the application. |
| The user must obtain a license. |
This method verifies whether the user has permission to perform a specified action on the stream. The ITA does any work needed to verify the user's right to perform the action, such as checking licenses.
To verify the user's rights, the ITA might need to perform additional steps that require interaction with the user or consent from the user. For example, it might need to acquire a new license or individualize a DRM component. In that case, the ITA creates an activation object for a content enabler and returns the activation object's
The Media Session returns the
The application calls
The application calls
The Media Session calls RequestAccess again.
The return value signals whether the user has permission to perform the action:
If the user already has permission to perform the action, the method returns
If the user does not have permission, the method returns a failure code and sets *ppContentEnablerActivate to
If the ITA must perform additional steps that require interaction with the user, the method returns a failure code and returns the content enabler's
The Media Session will not allow the action unless this method returns
A stream can go to multiple outputs, so this method might be called multiple times with different actions, once for every output.
-
Retrieves the policy that defines which output protection systems are allowed for this stream, and the configuration data for each protection system.
-The action that will be performed on this stream, specified as a member of the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Notifies the input trust authority (ITA) that a requested action is about to be performed.
-Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Before calling this method, the Media Session calls
Notifies the input trust authority (ITA) when the number of output trust authorities (OTAs) that will perform a specified action has changed.
-Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The ITA can update its internal state if needed. If the method returns a failure code, the Media Session cancels the action.
-
Resets the input trust authority (ITA) to its initial state.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
When this method is called, the ITA should disable any decrypter that was returned in the
Registers Media Foundation transforms (MFTs) in the caller's process.
The Media Session exposes this interface as a service. To obtain a reference to this interface, call the
This interface requires the Media Session. If you are not using the Media Session for playback, call one of the following functions instead:
Registers one or more Media Foundation transforms (MFTs) in the caller's process.
-A reference to an array of
The number of elements in the pMFTs array.
If this method succeeds, it returns
This method is similar to the
Unlike
Provides a generic way to store key/value pairs on an object. The keys are
For a list of predefined attribute
To create an empty attribute store, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of attributes that are set on this object.
-To enumerate all of the attributes, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the value associated with a key.
- A
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified key was not found. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the data type of the value associated with a key.
-Receives a member of the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether a stored attribute value equals a specified
Receives a Boolean value indicating whether the attribute matches the value given in Value. See Remarks. This parameter must not be
The method sets pbResult to FALSE if any of the following are true:
- No attribute is found whose key matches the one given in guidKey.
- The attribute's
- The attribute value does not match the value given in Value.
- The method fails.
Otherwise, the method sets pbResult to TRUE.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Compares the attributes on this object with the attributes on another object.
-Pointer to the
Member of the
Receives a Boolean value. The value is TRUE if the two sets of attributes match in the way specified by the MatchType parameter. Otherwise, the value is FALSE.
If pThis is the object whose Compare method is called, and pTheirs is the object passed in as the pTheirs parameter, the following comparisons are defined by MatchType.
Match type | Returns TRUE if and only if |
---|---|
For every attribute in pThis, an attribute with the same key and value exists in pTheirs. | |
For every attribute in pTheirs, an attribute with the same key and value exists in pThis. | |
The key/value pairs are identical in both objects. | |
Take the intersection of the keys in pThis and the keys in pTheirs. The values associated with those keys are identical in both pThis and pTheirs. | |
Take the object with the smallest number of attributes. For every attribute in that object, an attribute with the same key and value exists in the other object. |
The pTheirs and pbResult parameters must not be
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a UINT32 value associated with a key.
-Receives a UINT32 value. If the key is found and the data type is UINT32, the method copies the value into this parameter. Otherwise, the original value of this parameter is not changed.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a UINT64 value associated with a key.
-Receives a UINT64 value. If the key is found and the data type is UINT64, the method copies the value into this parameter. Otherwise, the original value of this parameter is not changed.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a double value associated with a key.
-Receives a double value. If the key is found and the data type is double, the method copies the value into this parameter. Otherwise, the original value of this parameter is not changed.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a
Receives a
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of a string value associated with a key.
-If the key is found and the value is a string type, this parameter receives the number of characters in the string, not including the terminating
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a wide-character string associated with a key.
-Pointer to a wide-character array allocated by the caller. The array must be large enough to hold the string, including the terminating
The size of the pwszValue array, in characters. This value includes the terminating
Receives the number of characters in the string, excluding the terminating
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The length of the string is too large to fit in a UINT32 value. |
| The buffer is not large enough to hold the string. |
| The specified key was not found. |
| The attribute value is not a string. |
You can also use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets a wide-character string associated with a key. This method allocates the memory for the string.
-A
If the key is found and the value is a string type, this parameter receives a copy of the string. The caller must free the memory for the string by calling CoTaskMemFree.
Receives the number of characters in the string, excluding the terminating
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified key was not found. |
| The attribute value is not a string. |
To copy a string value into a caller-allocated buffer, use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of a byte array associated with a key.
-If the key is found and the value is a byte array, this parameter receives the size of the array, in bytes.
To get the byte array, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a byte array associated with a key. This method copies the array into a caller-allocated buffer.
-Pointer to a buffer allocated by the caller. If the key is found and the value is a byte array, the method copies the array into this buffer. To find the required size of the buffer, call
The size of the pBuf buffer, in bytes.
Receives the size of the byte array. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer is not large enough to hold the array. |
| The specified key was not found. |
| The attribute value is not a byte array. |
You can also use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Provides a generic way to store key/value pairs on an object. The keys are
For a list of predefined attribute
To create an empty attribute store, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an interface reference associated with a key.
-Interface identifier (IID) of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The attribute value is an |
| The specified key was not found. |
| The attribute value is not an |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Adds an attribute value with a specified key.
- A
A
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
| Invalid attribute type. |
This method checks whether the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes a key/value pair from the object's attribute list.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the specified key does not exist, the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes all key/value pairs from the object's attribute list.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a UINT32 value with a key.
-New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the UINT32 value, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a UINT64 value with a key.
-New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the UINT64 value, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a double value with a key.
-New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the double value, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a
New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
To retrieve the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a wide-character string with a key.
-Null-terminated wide-character string to associate with this key. The method stores a copy of the string.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the string, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a byte array with a key.
-Pointer to a byte array to associate with this key. The method stores a copy of the array.
Size of the array, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the byte array, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the
It is not an error to call SetUnknown with pUnknown equal to
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Locks the attribute store so that no other thread can access it. If the attribute store is already locked by another thread, this method blocks until the other thread unlocks the object. After calling this method, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method can cause a deadlock if a thread that calls LockStore waits on a thread that calls any other
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Unlocks the attribute store after a call to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of attributes that are set on this object.
-Receives the number of attributes. This parameter must not be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To enumerate all of the attributes, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an attribute at the specified index.
-Index of the attribute to retrieve. To get the number of attributes, call
Receives the
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid index. |
To enumerate all of an object's attributes in a thread-safe way, do the following:
Call
Call
Call GetItemByIndex to get each attribute by index.
Call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Copies all of the attributes from this object into another attribute store.
- A reference to the
If this method succeeds, it returns
This method deletes all of the attributes originally stored in pDest.
Note: When you call CopyAllItems on an
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Attributes are used throughout Microsoft Media Foundation to configure objects, describe media formats, query object properties, and other purposes. For more information, see Attributes in Media Foundation.
For a complete list of all the defined attribute GUIDs in Media Foundation, see Media Foundation Attributes.
-Applies to: desktop apps | Metro style apps
Retrieves an attribute at the specified index.
-Index of the attribute to retrieve. To get the number of attributes, call GetCount.
Receives the GUID that identifies this attribute.
To enumerate all of an object's attributes in a thread-safe way, do the following:
Call LockStore to prevent another thread from adding or deleting attributes.
Call GetCount to find the number of attributes.
Call GetItemByIndex to get each attribute by index.
Call UnlockStore to unlock the attribute store.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Adds an attribute value with a specified key.
- A GUID that identifies the value to set. If this key already exists, the method overwrites the old value.
A PROPVARIANT that contains the attribute value.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_OUTOFMEMORY | Insufficient memory. |
MF_E_INVALIDTYPE | Invalid attribute type. |
This method checks whether the PROPVARIANT type is one of the attribute types defined in MF_ATTRIBUTE_TYPE, and fails if an unsupported type is used.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Adds an attribute value with a specified key.
- A GUID that identifies the value to set. If this key already exists, the method overwrites the old value.
A PROPVARIANT that contains the attribute value.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_OUTOFMEMORY | Insufficient memory. |
MF_E_INVALIDTYPE | Invalid attribute type. |
This method checks whether the PROPVARIANT type is one of the attribute types defined in MF_ATTRIBUTE_TYPE, and fails if an unsupported type is used.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Represents a block of memory that contains media data. Use this interface to access the data in the buffer.
-If the buffer contains 2-D image data (such as an uncompressed video frame), you should query the buffer for the IMF2DBuffer interface.
To get a buffer from a media sample, call one of the following IMFSample methods: GetBufferByIndex or ConvertToContiguousBuffer.
To create a new buffer object, use one of the following functions.
Function | Description |
---|---|
MFCreateMemoryBuffer | Creates a buffer and allocates system memory. |
MFCreateMediaBufferWrapper | Creates a media buffer that wraps an existing media buffer. |
MFCreateDXSurfaceBuffer | Creates a buffer that manages a DirectX surface. |
MFCreateAlignedMemoryBuffer | Creates a buffer and allocates system memory with a specified alignment. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of the valid data in the buffer.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the allocated size of the buffer.
-The buffer might or might not contain any valid data, and if there is valid data in the buffer, it might be smaller than the buffer's allocated size. To get the length of the valid data, call GetCurrentLength.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gives the caller access to the memory in the buffer, for reading or writing.
-Receives the maximum amount of data that can be written to the buffer. This parameter can be NULL.
Receives the length of the valid data in the buffer, in bytes. This parameter can be NULL.
Receives a reference to the start of the buffer.
This method gives the caller access to the entire buffer, up to the maximum size returned in the pcbMaxLength parameter. The value returned in pcbCurrentLength is the size of any valid data already in the buffer, which might be less than the total buffer size.
The reference returned in ppbBuffer is guaranteed to be valid, and can safely be accessed across the entire buffer for as long as the lock is held. When you are done accessing the buffer, call
Locking the buffer does not prevent other threads from calling Lock, so you should not rely on this method to synchronize threads.
This method does not allocate any memory, or transfer ownership of the memory to the caller. Do not release or free the memory; the media buffer will free the memory when the media buffer is destroyed.
If you modify the contents of the buffer, update the current length by calling SetCurrentLength.
If the buffer supports the IMF2DBuffer interface, you should use the IMF2DBuffer::Lock2D method to lock the buffer.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Unlocks a buffer that was previously locked. Call this method once for every call to Lock.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
D3DERR_INVALIDCALL | For Direct3D surface buffers, an error occurred when unlocking the surface. |
It is an error to call Unlock if you did not call Lock previously.
After calling this method, do not use the reference returned by the Lock method. It is no longer guaranteed to be valid.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of the valid data in the buffer.
-Receives the length of the valid data, in bytes. If the buffer does not contain any valid data, the value is zero.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the length of the valid data in the buffer.
-Length of the valid data, in bytes. This value cannot be greater than the allocated size of the buffer, which is returned by the GetMaxLength method.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | The specified length is greater than the maximum size of the buffer. |
Call this method if you write data into the buffer.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
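The Lock / write / SetCurrentLength / Unlock sequence described above can be sketched against a minimal in-memory buffer. This is a simplified stand-in, not the actual IMFMediaBuffer COM interface; the names and error handling are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <vector>

// Illustrative stand-in for a media buffer with a fixed allocated size
// and a separate "current length" that tracks the valid data.
class MediaBuffer {
public:
    explicit MediaBuffer(size_t maxLength)
        : m_storage(maxLength), m_currentLength(0) {}

    // Lock returns the start of the buffer plus its max and current lengths.
    uint8_t* Lock(size_t* maxLength, size_t* currentLength) {
        if (maxLength) *maxLength = m_storage.size();
        if (currentLength) *currentLength = m_currentLength;
        return m_storage.data();
    }
    void Unlock() {} // no-op in this sketch; real buffers track lock state

    size_t GetMaxLength() const { return m_storage.size(); }
    size_t GetCurrentLength() const { return m_currentLength; }

    // Fails if the length exceeds the allocated size (E_INVALIDARG in MF).
    void SetCurrentLength(size_t length) {
        if (length > m_storage.size())
            throw std::invalid_argument("length exceeds allocated size");
        m_currentLength = length;
    }

private:
    std::vector<uint8_t> m_storage;
    size_t m_currentLength;
};

// Write data into the buffer, then record how much of it is valid.
size_t WriteToBuffer(MediaBuffer& buffer, const uint8_t* data, size_t size) {
    size_t maxLength = 0;
    uint8_t* dest = buffer.Lock(&maxLength, nullptr);
    size_t written = size < maxLength ? size : maxLength;
    std::memcpy(dest, data, written);
    buffer.Unlock();
    buffer.SetCurrentLength(written); // callers must update the valid length
    return written;
}
```

The point of the pattern is the last step: the buffer cannot know how much of its allocated memory the caller actually filled, so the caller must report it via SetCurrentLength.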
Retrieves the allocated size of the buffer.
-Receives the allocated size of the buffer, in bytes.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
-The buffer might or might not contain any valid data, and if there is valid data in the buffer, it might be smaller than the buffer's allocated size. To get the length of the valid data, call GetCurrentLength.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Enables an application to play audio or video files.
-The Media Engine implements this interface. To create an instance of the Media Engine, call IMFMediaEngineClassFactory::CreateInstance.
This interface is extended with IMFMediaEngineEx.
Gets the most recent error status.
-This method returns the last error status, if any, that resulted from loading the media source. If there has not been an error, ppError receives the value NULL.
This method corresponds to the error attribute of the HTMLMediaElement interface in HTML5.
-Sets the current error code.
-Sets a list of media sources.
-This method corresponds to adding a list of source elements to a media element in HTML5.
The Media Engine tries to load each item in the pSrcElements list, until it finds one that loads successfully. After this method is called, the application can use the
This method completes asynchronously. When the operation starts, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_LOADSTART event.
If the Media Engine is unable to load a URL, it sends an MF_MEDIA_ENGINE_EVENT_ERROR event.
For more information about event handling in the Media Engine, see IMFMediaEngineNotify.
If the application also calls SetSource, the URL specified by SetSource takes precedence over the list of source elements.
Gets the current network state of the media engine.
-This method corresponds to the networkState attribute of the HTMLMediaElement interface in HTML5.
-Gets or sets the preload flag.
-This method corresponds to the preload attribute of the HTMLMediaElement interface in HTML5. The value is a hint to the user-agent whether to preload the media resource.
-Queries how much resource data the media engine has buffered.
-This method corresponds to the buffered attribute of the HTMLMediaElement interface in HTML5.
The returned
Gets the ready state, which indicates whether the current media resource can be rendered.
-This method corresponds to the readyState attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine is currently seeking to a new playback position.
-This method corresponds to the seeking attribute of the HTMLMediaElement interface in HTML5.
-Gets or sets the current playback position.
-This method corresponds to the currentTime attribute of the HTMLMediaElement interface in HTML5.
-Gets the initial playback position.
-This method corresponds to the initialTime attribute of the HTMLMediaElement interface in HTML5.
-Gets the duration of the media resource.
-This method corresponds to the duration attribute of the HTMLMediaElement interface in HTML5.
If the duration changes, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_DURATIONCHANGE event.
Queries whether playback is currently paused.
-This method corresponds to the paused attribute of the HTMLMediaElement interface in HTML5.
-Gets or sets the default playback rate.
-This method corresponds to getting the defaultPlaybackRate attribute of the HTMLMediaElement interface in HTML5.
The default playback rate is used for the next call to the Play method.
Gets or sets the current playback rate.
-This method corresponds to getting the playbackRate attribute of the HTMLMediaElement interface in HTML5.
-Gets the time ranges that have been rendered.
-This method corresponds to the played attribute of the HTMLMediaElement interface in HTML5.
-Gets the time ranges to which the Media Engine can currently seek.
-This method corresponds to the seekable attribute of the HTMLMediaElement interface in HTML5.
To find out whether the media source supports seeking, call
Queries whether playback has ended.
-This method corresponds to the ended attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine automatically begins playback.
-This method corresponds to the autoplay attribute of the HTMLMediaElement interface in HTML5.
If this method returns TRUE, playback begins automatically after the
Queries whether the Media Engine will loop playback.
-This method corresponds to getting the loop attribute of the HTMLMediaElement interface in HTML5.
If looping is enabled, the Media Engine seeks to the start of the content when playback reaches the end.
-Queries whether the audio is muted.
-Gets or sets the audio volume level.
-Gets the most recent error status.
-Receives either a reference to the IMFMediaError interface, or the value NULL. If the value is non-NULL, the caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method returns the last error status, if any, that resulted from loading the media source. If there has not been an error, ppError receives the value NULL.
This method corresponds to the error attribute of the HTMLMediaElement interface in HTML5.
-Sets the current error code.
-The error code, as an MF_MEDIA_ENGINE_ERR value.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Sets a list of media sources.
-A reference to the IMFMediaEngineSrcElements interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to adding a list of source elements to a media element in HTML5.
The Media Engine tries to load each item in the pSrcElements list, until it finds one that loads successfully. After this method is called, the application can use the
This method completes asynchronously. When the operation starts, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_LOADSTART event.
If the Media Engine is unable to load a URL, it sends an MF_MEDIA_ENGINE_EVENT_ERROR event.
For more information about event handling in the Media Engine, see IMFMediaEngineNotify.
If the application also calls SetSource, the URL specified by SetSource takes precedence over the list of source elements.
Sets the URL of a media resource.
-The URL of the media resource.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to setting the src attribute of the HTMLMediaElement interface in HTML5.
The URL specified by this method takes precedence over media resources specified in the SetSourceElements method.
This method asynchronously loads the URL. When the operation starts, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_LOADSTART event.
If the Media Engine is unable to load the URL, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_ERROR event.
For more information about event handling in the Media Engine, see IMFMediaEngineNotify.
Gets the URL of the current media resource, or an empty string if no media resource is present.
-Receives a BSTR that contains the URL of the current media resource. If there is no media resource, ppUrl receives an empty string. The caller must free the BSTR by calling SysFreeString.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to the currentSrc attribute of the HTMLMediaElement interface in HTML5.
Initially, the current media resource is empty. It is updated when the Media Engine performs the resource selection algorithm.
-Gets the current network state of the media engine.
-Returns an MF_MEDIA_ENGINE_NETWORK enumeration value.
This method corresponds to the networkState attribute of the HTMLMediaElement interface in HTML5.
-Gets the preload flag.
-Returns an MF_MEDIA_ENGINE_PRELOAD enumeration value.
This method corresponds to the preload attribute of the HTMLMediaElement interface in HTML5. The value is a hint to the user-agent whether to preload the media resource.
-Sets the preload flag.
-An MF_MEDIA_ENGINE_PRELOAD value equal to the preload flag to set.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to setting the preload attribute of the HTMLMediaElement interface in HTML5. The value is a hint to the user-agent whether to preload the media resource.
-Queries how much resource data the media engine has buffered.
-Receives a reference to the IMFMediaTimeRange interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to the buffered attribute of the HTMLMediaElement interface in HTML5.
The returned
Loads the current media source.
-If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The main purpose of this method is to reload a list of source elements after updating the list. For more information, see SetSourceElements. Otherwise, calling this method is generally not required. To load a new media source, call SetSource or SetSourceElements.
The Load method explicitly invokes the Media Engine's media resource loading algorithm. Before calling this method, you must set the media resource by calling SetSource or SetSourceElements.
This method completes asynchronously. When the Load operation starts, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_LOADSTART event.
If the Media Engine is unable to load the file, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_ERROR event.
For more information about event handling in the Media Engine, see IMFMediaEngineNotify.
This method corresponds to the load method of the HTMLMediaElement interface in HTML5.
-Queries how likely it is that the Media Engine can play a specified type of media resource.
-A string that contains a MIME type with an optional codecs parameter, as defined in RFC 4281.
Receives an MF_MEDIA_ENGINE_CANPLAY enumeration value.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to the canPlayType attribute of the HTMLMediaElement interface in HTML5.
The canPlayType attribute defines the following values.
Value | Description |
---|---|
"" (empty string) | The user-agent cannot play the resource, or the resource type is "application/octet-stream". |
"probably" | The user-agent probably can play the resource. |
"maybe" | Neither of the previous values applies. |
The value "probably" is used because a MIME type for a media resource is generally not a complete description of the resource. For example, "video/mp4" specifies an MP4 file with video, but does not describe the codec. Even with the optional codecs parameter, the MIME type omits some information, such as the actual coded bit rate. Therefore, it is usually impossible to be certain that playback is possible until the actual media resource is opened.
-Gets the ready state, which indicates whether the current media resource can be rendered.
-Returns an MF_MEDIA_ENGINE_READY enumeration value.
This method corresponds to the readyState attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine is currently seeking to a new playback position.
-Returns TRUE if the Media Engine is seeking, or FALSE otherwise.
This method corresponds to the seeking attribute of the HTMLMediaElement interface in HTML5.
-Gets the current playback position.
-Returns the playback position, in seconds.
This method corresponds to the currentTime attribute of the HTMLMediaElement interface in HTML5.
-Seeks to a new playback position.
-The new playback position, in seconds.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to setting the currentTime attribute of the HTMLMediaElement interface in HTML5.
The method completes asynchronously. When the seek operation starts, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_SEEKING event. When the seek operation completes, it sends an MF_MEDIA_ENGINE_EVENT_SEEKED event.
Gets the initial playback position.
-Returns the initial playback position, in seconds.
This method corresponds to the initialTime attribute of the HTMLMediaElement interface in HTML5.
-Gets the duration of the media resource.
-Returns the duration, in seconds. If no media data is available, the method returns not-a-number (NaN). If the duration is unbounded, the method returns an infinite value.
This method corresponds to the duration attribute of the HTMLMediaElement interface in HTML5.
If the duration changes, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_DURATIONCHANGE event.
Queries whether playback is currently paused.
-Returns TRUE if playback is paused, or FALSE otherwise.
This method corresponds to the paused attribute of the HTMLMediaElement interface in HTML5.
-Gets the default playback rate.
-Returns the default playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
This method corresponds to getting the defaultPlaybackRate attribute of the HTMLMediaElement interface in HTML5.
The default playback rate is used for the next call to the Play method.
Sets the default playback rate.
-The default playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to setting the defaultPlaybackRate attribute of the HTMLMediaElement interface in HTML5.
-Gets the current playback rate.
-Returns the playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
This method corresponds to getting the playbackRate attribute of the HTMLMediaElement interface in HTML5.
-Sets the current playback rate.
-The playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to setting the playbackRate attribute of the HTMLMediaElement interface in HTML5.
-Gets the time ranges that have been rendered.
-Receives a reference to the IMFMediaTimeRange interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to the played attribute of the HTMLMediaElement interface in HTML5.
-Gets the time ranges to which the Media Engine can currently seek.
-Receives a reference to the IMFMediaTimeRange interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to the seekable attribute of the HTMLMediaElement interface in HTML5.
To find out whether the media source supports seeking, call
Queries whether playback has ended.
-Returns TRUE if the direction of playback is forward and playback has reached the end of the media resource. Returns FALSE otherwise.
This method corresponds to the ended attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine automatically begins playback.
-Returns TRUE if the Media Engine automatically begins playback, or FALSE otherwise.
This method corresponds to the autoplay attribute of the HTMLMediaElement interface in HTML5.
If this method returns TRUE, playback begins automatically after the
Specifies whether the Media Engine automatically begins playback.
-If TRUE, the Media Engine automatically begins playback after it loads a media source. Otherwise, playback does not begin until the application calls the Play method.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to setting the autoplay attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine will loop playback.
-Returns TRUE if looping is enabled, or FALSE otherwise.
This method corresponds to getting the loop attribute of the HTMLMediaElement interface in HTML5.
If looping is enabled, the Media Engine seeks to the start of the content when playback reaches the end.
-Specifies whether the Media Engine loops playback.
-Specify TRUE to enable looping, or FALSE to disable looping.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If Loop is TRUE, playback loops back to the beginning when it reaches the end of the source.
This method corresponds to setting the loop attribute of the HTMLMediaElement interface in HTML5.
-Starts playback.
-If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to the play method of the HTMLMediaElement interface in HTML5.
The method completes asynchronously. When the operation starts, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_PLAY event.
Pauses playback.
-If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to the pause method of the HTMLMediaElement interface in HTML5.
The method completes asynchronously. When the transition to paused is complete, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_PAUSE event.
Queries whether the audio is muted.
-Returns TRUE if the audio is muted, or FALSE otherwise.
Mutes or unmutes the audio.
-Specify TRUE to mute the audio, or FALSE to unmute the audio.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the audio volume level.
-Returns the volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
Sets the audio volume level.
-The volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When the audio volume changes, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_VOLUMECHANGE event.
Queries whether the current media resource contains a video stream.
-Returns TRUE if the current media resource contains a video stream. Returns FALSE otherwise.
Queries whether the current media resource contains an audio stream.
-Returns TRUE if the current media resource contains an audio stream. Returns FALSE otherwise.
Gets the size of the video frame, adjusted for aspect ratio.
-Receives the width in pixels.
Receives the height in pixels.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method adjusts for the correct picture aspect ratio. For example, if the encoded frame is 720 × 480 and the picture aspect ratio is 4:3, the method will return a size equal to 640 × 480 pixels.
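The adjustment described above can be sketched as a small helper that derives a square-pixel display size from the encoded height and a target picture aspect ratio. This is an illustrative calculation, not the Media Engine's internal code:

```cpp
#include <utility>

// Given an encoded frame size and a picture (display) aspect ratio,
// return the display size with square pixels. This sketch keeps the
// height fixed and recomputes the width from the aspect ratio.
std::pair<int, int> DisplaySize(int encodedWidth, int encodedHeight,
                                int aspectX, int aspectY) {
    (void)encodedWidth; // width is recomputed from the aspect ratio
    int displayWidth = encodedHeight * aspectX / aspectY;
    return {displayWidth, encodedHeight};
}
```

For the 720 × 480 frame with a 4:3 picture aspect ratio mentioned above, this yields 640 × 480.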
-Gets the picture aspect ratio of the video stream.
-Receives the x component of the aspect ratio.
Receives the y component of the aspect ratio.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The Media Engine automatically converts the pixel aspect ratio to 1:1 (square pixels).
-Shuts down the Media Engine and releases the resources it is using.
-If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Copies the current video frame to a DXGI surface or WIC bitmap.
-A reference to the IUnknown interface of the destination surface.
A reference to an MFVideoNormalizedRect structure that specifies the source rectangle.
A reference to a RECT structure that specifies the destination rectangle.
A reference to an MFARGB structure that specifies the border color.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
In frame-server mode, call this method to blit the video frame to a DXGI or WIC surface. The application can call this method at any time after the Media Engine loads a video resource. Typically, however, the application calls OnVideoStreamTick first, to determine whether a new frame is available.
The Media Engine scales and letterboxes the video to fit the destination rectangle. It fills the letterbox area with the border color.
For protected content, call the
Queries the Media Engine to find out whether a new video frame is ready.
-If a new frame is ready, receives the presentation time of the frame.
This method can return one of these values.
Return code | Description |
---|---|
S_FALSE | The method succeeded, but the Media Engine does not have a new frame. |
S_OK | A new video frame is ready for display. |
In frame-server mode, the application should call this method whenever a vertical blank occurs in the display device. If the method returns S_OK, call TransferVideoFrame to blit the new frame to the render target.
Do not call this method in rendering mode or audio-only mode.
-[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Queries the Media Engine to find out whether a new video frame is ready.
-If a new frame is ready, receives the presentation time of the frame.
In frame-server mode, the application should call this method whenever a vertical blank occurs in the display device. If the method returns S_OK, call TransferVideoFrame to blit the new frame to the render target.
Do not call this method in rendering mode or audio-only mode.
-[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Sets the URL of a media resource.
-The URL of the media resource.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method corresponds to setting the src attribute of the HTMLMediaElement interface in HTML5.
The URL specified by this method takes precedence over media resources specified in the SetSourceElements method.
This method asynchronously loads the URL. When the operation starts, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_LOADSTART event.
If the Media Engine is unable to load the URL, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_ERROR event.
For more information about event handling in the Media Engine, see IMFMediaEngineNotify.
Creates a new instance of the Media Engine.
-Before calling this method, call MFStartup.
The Media Engine supports three distinct modes:
Mode | Description |
---|---|
Frame-server mode | In this mode, the Media Engine delivers uncompressed video frames to the application. The application is responsible for displaying each frame, using Microsoft Direct3D or any other rendering technique. The Media Engine renders the audio; the application is not responsible for audio rendering. Frame-server mode is the default mode. |
Rendering mode | In this mode, the Media Engine renders both audio and video. The video is rendered to a window or Microsoft DirectComposition visual provided by the application. To enable rendering mode, set either the |
Audio mode | In this mode, the Media Engine renders audio only, with no video. To enable audio mode, set the |
-Creates a new instance of the Media Engine.
-A bitwise OR of zero or more flags from the MF_MEDIA_ENGINE_CREATEFLAGS enumeration.
A reference to the IMFAttributes interface of an attribute store.
This parameter specifies configuration attributes for the Media Engine. Call MFCreateAttributes to create the attribute store, and then set the required attributes.
Receives a reference to the IMFMediaEngine interface. The caller must release the interface.
This method can return one of these values.
Return code | Description |
---|---|
S_OK | Success. |
MF_E_ATTRIBUTENOTFOUND | A required attribute was missing from pAttr, or an invalid combination of attributes was used. |
Before calling this method, call MFStartup.
The Media Engine supports three distinct modes:
Mode | Description |
---|---|
Frame-server mode | In this mode, the Media Engine delivers uncompressed video frames to the application. The application is responsible for displaying each frame, using Microsoft Direct3D or any other rendering technique. The Media Engine renders the audio; the application is not responsible for audio rendering. Frame-server mode is the default mode. |
Rendering mode | In this mode, the Media Engine renders both audio and video. The video is rendered to a window or Microsoft DirectComposition visual provided by the application. To enable rendering mode, set either the |
Audio mode | In this mode, the Media Engine renders audio only, with no video. To enable audio mode, set the |
-Creates a time range object.
-Receives a reference to the IMFMediaTimeRange interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates a media error object.
-Receives a reference to the IMFMediaError interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates an instance of the
Creates a media keys object based on the specified key system.
-The media key system.
Points to the default file location for the store Content Decryption Module (CDM) data.
Points to the inprivate location for the store Content Decryption Module (CDM) data. Specifying this path allows the CDM to comply with the application's privacy policy by putting personal information in the file location indicated by this path.
Receives the media keys.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets a value that indicates if the specified key system supports the specified media type.
-Creates an instance of
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates a media keys object based on the specified key system.
-The media keys system.
Points to a location to store Content Decryption Module (CDM) data, which might be locked by multiple processes and so might be incompatible with store app suspension.
The media keys.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Checks if keySystem is a supported key system and creates the related Content Decryption Module (CDM).
-Gets a value that indicates if the specified key system supports the specified media type.
-The MIME type to check support for.
The key system to check support for.
true if type is supported by keySystem; otherwise, false.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Implemented by the media engine to add encrypted media extensions methods.
-Gets the media keys object associated with the media engine or null if there is not a media keys object.
-Sets the media keys object to use with the media engine.
-Gets the media keys object associated with the media engine or null if there is not a media keys object.
-The media keys object associated with the media engine or null if there is not a media keys object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Sets the media keys object to use with the media engine.
-The media keys.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Extends the IMFMediaEngine interface.
The
Gets or sets the audio balance.
-Gets various flags that describe the media resource.
-Gets the number of streams in the media resource.
-Queries whether the media resource contains protected content.
-Gets or sets the time of the next timeline marker, if any.
-Queries whether the media resource contains stereoscopic 3D video.
-For stereoscopic 3D video, gets the layout of the two views within a video frame.
-For stereoscopic 3D video, queries how the Media Engine renders the 3D video content.
-Gets a handle to the windowless swap chain.
-To enable windowless swap-chain mode, call EnableWindowlessSwapchainMode.
Gets or sets the audio stream category used for the next call to SetSource or Load.
-For information on audio stream categories, see the AUDIO_STREAM_CATEGORY enumeration.
Gets or sets the audio device endpoint role used for the next call to SetSource or Load.
-For information on audio endpoint roles, see ERole enumeration.
-Gets or sets the real time mode used for the next call to SetSource or Load.
-Opens a media resource from a byte stream.
-A reference to the IMFByteStream interface of the byte stream.
The URL of the byte stream.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets a playback statistic from the Media Engine.
-A member of the MF_MEDIA_ENGINE_STATISTIC enumeration that identifies the statistic to get.
A reference to a PROPVARIANT that receives the statistic.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Updates the source rectangle, destination rectangle, and border color for the video.
-A reference to an MFVideoNormalizedRect structure that specifies the source rectangle, or NULL.
A reference to a RECT structure that specifies the destination rectangle, or NULL.
A reference to an MFARGB structure that specifies the border color, or NULL.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
In rendering mode, call this method to reposition the video, update the border color, or repaint the video frame. If all of the parameters are NULL, the method repaints the most recent video frame.
In frame-server mode, this method has no effect.
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Gets the audio balance.
-Returns the balance. The value can be any number in the following range (inclusive).
Return value | Description |
---|---|
-1.0 | The left channel is at full volume; the right channel is silent. |
1.0 | The right channel is at full volume; the left channel is silent. |
If the value is zero, the left and right channels are at equal volumes. The default value is zero.
Sets the audio balance.
-The audio balance. The value can be any number in the following range (inclusive).
Value | Meaning |
---|---|
-1.0 | The left channel is at full volume; the right channel is silent. |
1.0 | The right channel is at full volume; the left channel is silent. |
If the value is zero, the left and right channels are at equal volumes. The default value is zero.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When the audio balance changes, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_BALANCECHANGE event.
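One plausible way to map such a balance value onto per-channel gains is linear panning that attenuates only the far channel. This is an illustrative mapping; the Media Engine's actual attenuation curve is not documented here:

```cpp
#include <utility>

// Map a balance in [-1.0, 1.0] to (leftGain, rightGain) in [0.0, 1.0].
// -1.0 -> left channel at full volume, right silent; 1.0 -> the reverse;
//  0.0 -> both channels at full volume.
std::pair<double, double> BalanceToGains(double balance) {
    if (balance < -1.0) balance = -1.0; // clamp out-of-range input
    if (balance >  1.0) balance =  1.0;
    double left  = balance <= 0.0 ? 1.0 : 1.0 - balance;
    double right = balance >= 0.0 ? 1.0 : 1.0 + balance;
    return {left, right};
}
```

The endpoints match the table above: -1.0 gives (1.0, 0.0) and 1.0 gives (0.0, 1.0), with zero leaving both channels untouched.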
Queries whether the Media Engine can play at a specified playback rate.
-The requested playback rate.
Returns TRUE if the playback rate is supported, or FALSE otherwise.
Playback rates are expressed as a ratio of the current rate to the normal rate. For example, 1.0 is normal playback speed, 0.5 is half speed, and 2.0 is 2× speed. Positive values mean forward playback, and negative values mean reverse playback.
The results of this method can vary depending on the media resource that is currently loaded. Some media formats might support faster playback rates than others. Also, some formats might not support reverse play.
-Steps forward or backward one frame.
-Specify TRUE to step forward or FALSE to step backward.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The frame-step direction is independent of the current playback direction.
This method completes asynchronously. When the operation completes, the Media Engine sends an MF_MEDIA_ENGINE_EVENT_FRAMESTEPCOMPLETED event.
Gets various flags that describe the media resource.
-Receives a bitwise OR of zero or more flags from the MFMEDIASOURCE_CHARACTERISTICS enumeration.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets a presentation attribute from the media resource.
-The attribute to query. For a list of presentation attributes, see Presentation Descriptor Attributes.
A reference to a PROPVARIANT that receives the value of the attribute.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the number of streams in the media resource.
-Receives the number of streams.
If this method succeeds, it returns
Gets a stream-level attribute from the media resource.
-The zero-based index of the stream. To get the number of streams, call
The attribute to query. Possible values are listed in the following topics:
A reference to a
If this method succeeds, it returns
Queries whether a stream is selected to play.
-The zero-based index of the stream. To get the number of streams, call
Receives a Boolean value.
Value | Meaning |
---|---|
| The stream is selected. During playback, this stream will play. |
| The stream is not selected. During playback, this stream will not play. |
If this method succeeds, it returns
Selects or deselects a stream for playback.
-The zero-based index of the stream. To get the number of streams, call
Specifies whether to select or deselect the stream.
Value | Meaning |
---|---|
| The stream is selected. During playback, this stream will play. |
| The stream is not selected. During playback, this stream will not play. |
If this method succeeds, it returns
Applies the stream selections from previous calls to SetStreamSelection.
-If this method succeeds, it returns
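The select-then-apply pattern described above can be modeled with a small sketch (hypothetical types; the real methods live on the Media Engine interface):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical model of the pattern: SetStreamSelection stages a change, and
// it only takes effect once ApplyStreamSelections is called.
struct StreamSelections {
    std::vector<bool> pending, active;

    explicit StreamSelections(std::size_t streamCount)
        : pending(streamCount, false), active(streamCount, false) {}

    void SetStreamSelection(std::size_t index, bool enabled) { pending[index] = enabled; }
    void ApplyStreamSelections() { active = pending; } // staged choices become live
};
```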
Queries whether the media resource contains protected content.
-Receives the value TRUE if the media resource contains protected content, or FALSE otherwise.
If this method succeeds, it returns
Inserts a video effect.
-One of the following:
Specifies whether the effect is optional.
Value | Meaning |
---|---|
| The effect is optional. If the Media Engine cannot add the effect, it ignores the effect and continues playback. |
| The effect is required. If the Media Engine object cannot add the effect, a playback error occurs. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The maximum number of video effects was reached. |
The effect is applied when the next media resource is loaded.
-Inserts an audio effect.
-One of the following:
Specifies whether the effect is optional.
Value | Meaning |
---|---|
| The effect is optional. If the Media Engine cannot add the effect, it ignores the effect and continues playback. |
| The effect is required. If the Media Engine object cannot add the effect, a playback error occurs. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The maximum number of audio effects was reached. |
The effect is applied when the next media resource is loaded.
-Removes all audio and video effects.
-If this method succeeds, it returns
Call this method to remove all of the effects that were added with the InsertAudioEffect and InsertVideoEffect methods.
-Specifies a presentation time when the Media Engine will send a marker event.
-The presentation time for the marker event, in seconds.
If this method succeeds, it returns
When playback reaches the time specified by timeToFire, the Media Engine sends an
If the application seeks past the marker point, the Media Engine cancels the marker and does not send the event.
During forward playback, set timeToFire to a value greater than the current playback position. During reverse playback, set timeToFire to a value less than the playback position.
To cancel a marker, call
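The firing rule can be illustrated with a direction-aware check (hypothetical helper; the real Media Engine raises an event rather than returning a flag):

```cpp
// Hypothetical sketch of the marker rule: during forward playback the marker
// fires once the position reaches timeToFire; during reverse playback, once
// the position falls back to it.
bool MarkerReached(double position, double timeToFire, bool playingForward)
{
    return playingForward ? position >= timeToFire
                          : position <= timeToFire;
}
```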
Gets the time of the next timeline marker, if any.
-Receives the marker time, in seconds. If no marker is set, this parameter receives the value NaN.
If this method succeeds, it returns
Cancels the next pending timeline marker.
-If this method succeeds, it returns
Call this method to cancel the
Queries whether the media resource contains stereoscopic 3D video.
-Returns TRUE if the media resource contains 3D video, or FALSE otherwise.
For stereoscopic 3D video, gets the layout of the two views within a video frame.
-Receives a member of the
If this method succeeds, it returns
For stereoscopic 3D video, sets the layout of the two views within a video frame.
-A member of the
If this method succeeds, it returns
For stereoscopic 3D video, queries how the Media Engine renders the 3D video content.
-Receives a member of the
If this method succeeds, it returns
For stereoscopic 3D video, specifies how the Media Engine renders the 3D video content.
-A member of the
If this method succeeds, it returns
Enables or disables windowless swap-chain mode.
-If TRUE, windowless swap-chain mode is enabled.
If this method succeeds, it returns
In windowless swap-chain mode, the Media Engine creates a windowless swap chain and presents video frames to the swap chain. To render the video, call
Gets a handle to the windowless swap chain.
-Receives a handle to the swap chain.
If this method succeeds, it returns
To enable windowless swap-chain mode, call
Enables or disables mirroring of the video.
-If TRUE, the video is mirrored horizontally. Otherwise, the video is displayed normally.
If this method succeeds, it returns
Gets the audio stream category used for the next call to SetSource or Load.
-If this method succeeds, it returns
For information on audio stream categories, see
Sets the audio stream category for the next call to SetSource or Load.
-If this method succeeds, it returns
For information on audio stream categories, see
Gets the audio device endpoint role used for the next call to SetSource or Load.
-If this method succeeds, it returns
For information on audio endpoint roles, see ERole enumeration.
-Sets the audio device endpoint used for the next call to SetSource or Load.
-If this method succeeds, it returns
For information on audio endpoint roles, see ERole enumeration.
-Gets the real time mode used for the next call to SetSource or Load.
-If this method succeeds, it returns
Sets the real time mode used for the next call to SetSource or Load.
-If this method succeeds, it returns
Seeks to a new playback position using the specified
If this method succeeds, it returns
Enables or disables the time update timer.
-If TRUE, the update timer is enabled. Otherwise, the timer is disabled.
If this method succeeds, it returns
[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Opens a media resource from a byte stream.
-A reference to the
The URL of the byte stream.
If this method succeeds, it returns
Enables an application to load media resources in the Media Engine.
-To use this interface, set the
Queries whether the object can load a specified type of media resource.
-If TRUE, the Media Engine is set to audio-only mode. Otherwise, the Media Engine is set to audio-video mode.
A string that contains a MIME type with an optional codecs parameter, as defined in RFC 4281.
Receives a member of the
If this method succeeds, it returns
Implement this method if your Media Engine extension supports one or more MIME types.
-Begins an asynchronous request to create either a byte stream or a media source.
-The URL of the media resource.
A reference to the
If the type parameter equals
If type equals
A member of the
Value | Meaning |
---|---|
Create a byte stream. The byte stream must support the |
Create a media source. The media source must support the |
Receives a reference to the
The caller must release the interface. This parameter can be
A reference to the
A reference to the
If this method succeeds, it returns
This method requests the object to create either a byte stream or a media source, depending on the value of the type parameter:
The method is performed asynchronously. The Media Engine calls the
Cancels the current request to create an object.
-The reference that was returned in the ppIUnknownCancelCookie parameter of the
If this method succeeds, it returns
This method attempts to cancel a previous call to BeginCreateObject. Because that method is asynchronous, however, it might complete before the operation can be canceled.
-Completes an asynchronous request to create a byte stream or media source.
-A reference to the
Receives a reference to the
If this method succeeds, it returns
The Media Engine calls this method to complete the
Represents a callback to the media engine to notify key request data.
-Notifies the application that a key or keys are needed along with any initialization data.
-The initialization data.
The count in bytes of initData.
Callback interface for the
To set the callback reference on the Media Engine, set the
Notifies the application when a playback event occurs.
-A member of the
The first event parameter. The meaning of this parameter depends on the event code.
The second event parameter. The meaning of this parameter depends on the event code.
If this method succeeds, it returns
Provides methods for getting information about the Output Protection Manager (OPM).
-To get a reference to this interface, call QueryInterface on the Media Engine.
The
Gets status information about the Output Protection Manager (OPM).
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| If any of the parameters are |
Copies a protected video frame to a DXGI surface.
-For protected content, call this method instead of the
Gets the content protections that must be applied in frame-server mode.
-Specifies the window that should receive output link protections.
-In frame-server mode, call this method to specify the destination window for protected video content. The Media Engine uses this window to set link protections, using the Output Protection Manager (OPM).
-Sets the content protection manager (CPM).
-The Media Engine uses the CPM to handle events related to protected content, such as license acquisition.
-Enables the Media Engine to access protected content while in frame-server mode.
-A reference to the Direct3D 11 device content. The Media Engine queries this reference for the
If this method succeeds, it returns
In frame-server mode, this method enables the Media Engine to share protected content with the Direct3D 11 device.
-Gets the content protections that must be applied in frame-server mode.
-Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Specifies the window that should receive output link protections.
-A handle to the window.
If this method succeeds, it returns
In frame-server mode, call this method to specify the destination window for protected video content. The Media Engine uses this window to set link protections, using the Output Protection Manager (OPM).
-Copies a protected video frame to a DXGI surface.
-A reference to the
A reference to an
A reference to a
A reference to an
Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
For protected content, call this method instead of the
Sets the content protection manager (CPM).
-A reference to the
If this method succeeds, it returns
The Media Engine uses the CPM to handle events related to protected content, such as license acquisition.
-Sets the application's certificate.
-A reference to a buffer that contains the certificate in X.509 format, followed by the application identifier signed with a SHA-256 signature using the private key from the certificate.
The size of the pbBlob buffer, in bytes.
If this method succeeds, it returns
Call this method to access protected video content in frame-server mode.
-Provides the Media Engine with a list of media resources.
-The
This interface enables the application to provide the same audio/video content in several different encoding formats, such as H.264 and Windows Media Video. If a particular codec is not present on the user's computer, the Media Engine will try the next URL in the list. To use this interface, do the following:
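The fallback behavior can be sketched as a simple first-match scan (hypothetical types; the real list is consumed by the Media Engine itself):

```cpp
#include <functional>
#include <string>
#include <vector>

struct SourceElement { std::string url; std::string mimeType; };

// Hypothetical sketch of the fallback rule: walk the list in order and pick
// the first element whose MIME type the host reports as playable; if none is
// playable, report failure by returning nullptr.
const SourceElement* PickPlayableSource(
    const std::vector<SourceElement>& elements,
    const std::function<bool(const std::string&)>& canPlay)
{
    for (const auto& element : elements)
        if (canPlay(element.mimeType))
            return &element;
    return nullptr;
}
```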
Gets the number of source elements in the list.
-Gets the number of source elements in the list.
-Returns the number of source elements.
Gets the URL of an element in the list.
-The zero-based index of the source element. To get the number of source elements, call
Receives a BSTR that contains the URL of the source element. The caller must free the BSTR by calling SysFreeString. If no URL is set, this parameter receives the value
If this method succeeds, it returns
Gets the MIME type of an element in the list.
-The zero-based index of the source element. To get the number of source elements, call
Receives a BSTR that contains the MIME type. The caller must free the BSTR by calling SysFreeString. If no MIME type is set, this parameter receives the value
If this method succeeds, it returns
Gets the intended media type of an element in the list.
-The zero-based index of the source element. To get the number of source elements, call
Receives a BSTR that contains a media-query string. The caller must free the BSTR by calling SysFreeString. If no media type is set, this parameter receives the value
If this method succeeds, it returns
The string returned in pMedia should be a media-query string that conforms to the W3C Media Queries specification.
-Adds a source element to the end of the list.
-The URL of the source element, or
The MIME type of the source element, or
A media-query string that specifies the intended media type, or
If this method succeeds, it returns
Any of the parameters to this method can be
This method allocates copies of the BSTRs that are passed in.
-Removes all of the source elements from the list.
-If this method succeeds, it returns
Extends the
Provides an enhanced version of
If this method succeeds, it returns
Gets the key system for the given source element index.
-The source element index.
The MIME type of the source element.
If this method succeeds, it returns
Enables the media source to be transferred between the media engine and the sharing engine for Play To.
-Specifies whether the source should be transferred.
-true if the source should be transferred; otherwise, false.
If this method succeeds, it returns
Detaches the media source.
-Receives the byte stream.
Receives the media source.
Receives the media source extension.
If this method succeeds, it returns
Attaches the media source.
-Specifies the byte stream.
Specifies the media source.
Specifies the media source extension.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables playback of web audio.
-Gets a value indicating whether connecting to Web audio should delay the page's load event.
-True if connection to Web audio should delay the page's load event; otherwise, false.
Connects web audio to Media Engine using the specified sample rate.
-The sample rate of the web audio.
The sample rate of the web audio.
Returns
Disconnects web audio from the Media Engine.
-Returns
Provides the current error status for the Media Engine.
-The
To get a reference to this interface, call
Gets or sets the extended error code.
-Gets the error code.
-Returns a value from the
Gets the extended error code.
-Returns an
Sets the error code.
-The error code, specified as an
If this method succeeds, it returns
Sets the extended error code.
-An
If this method succeeds, it returns
Represents an event generated by a Media Foundation object. Use this interface to get information about the event.
To get a reference to this interface, call
If you are implementing an object that generates events, call the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the event type. The event type indicates what happened to trigger the event. It also defines the meaning of the event value.
Retrieves the extended type of the event.
-To define a custom event, create a new extended-type
Some standard Media Foundation events also use the extended type to differentiate between types of event data.
Retrieves an
Retrieves the value associated with the event, if any. The value is retrieved as a
Before calling this method, call PropVariantInit to initialize the
Retrieves the event type. The event type indicates what happened to trigger the event. It also defines the meaning of the event value.
-Receives the event type. For a list of event types, see Media Foundation Events.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the extended type of the event.
-Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To define a custom event, create a new extended-type
Some standard Media Foundation events also use the extended type to differentiate between types of event data.
Retrieves an
Receives the event status. If the operation that generated the event was successful, the value is a success code. A failure code means that an error condition triggered the event.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the value associated with the event, if any. The value is retrieved as a
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Before calling this method, call PropVariantInit to initialize the
Retrieves events from any Media Foundation object that generates events.
-An object that supports this interface maintains a queue of events. The client of the object can retrieve the events either synchronously or asynchronously. The synchronous method is GetEvent. The asynchronous methods are BeginGetEvent and EndGetEvent.
Retrieves the next event in the queue. This method is synchronous.
-Specifies one of the following values.
Value | Meaning |
---|---|
| The method blocks until the event generator queues an event. |
| The method returns immediately. |
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| There is a pending request. |
| There are no events in the queue. |
| The object was shut down. |
This method executes synchronously.
If the queue already contains an event, the method returns
If dwFlags is 0, the method blocks indefinitely until a new event is queued, or until the event generator is shut down.
If dwFlags is MF_EVENT_FLAG_NO_WAIT, the method fails immediately with the return code
This method returns
Begins an asynchronous request for the next event in the queue.
-Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| There is a pending request with the same callback reference and a different state object. |
| There is a pending request with a different callback reference. |
| The object was shut down. |
| There is a pending request with the same callback reference and state object. |
When a new event is available, the event generator calls the
Do not call BeginGetEvent a second time before calling EndGetEvent. While the first call is still pending, additional calls to the same object will fail. Also, the
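The pending-request rules in the table above amount to a small decision function. In this hypothetical sketch, plain strings stand in for the HRESULT codes, whose names are elided in this document:

```cpp
#include <string>

// Hypothetical classification of a BeginGetEvent call against any pending
// request: only the first request is accepted; repeats are rejected or, for
// an identical callback/state pair, reported as duplicates.
std::string ClassifyBeginGetEvent(bool requestPending, bool sameCallback, bool sameState)
{
    if (!requestPending) return "accepted";            // no outstanding request
    if (!sameCallback)   return "different-callback";  // rejected
    if (!sameState)      return "different-state";     // rejected
    return "duplicate";                                // same callback and state object
}
```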
Completes an asynchronous request for the next event in the queue.
-Pointer to the
Receives a reference to the
Call this method from inside your application's
Puts a new event in the object's queue.
-Specifies the event type. The event type is returned by the event's
The extended type. If the event does not have an extended type, use the value GUID_NULL. The extended type is returned by the event's
A success or failure code indicating the status of the event. This value is returned by the event's
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object was shut down. |
Provides an event queue for applications that need to implement the
This interface is exposed by a helper object that implements an event queue. If you are writing a component that implements the
Retrieves the next event in the queue. This method is synchronous.
Call this method inside your implementation of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Begins an asynchronous request for the next event in the queue.
Call this method inside your implementation of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Completes an asynchronous request for the next event in the queue.
Call this method inside your implementation of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Puts an event in the queue.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Call this method when your component needs to raise an event that contains attributes. To create the event object, call
Creates an event, sets a
Call this method inside your implementation of
You can also call this method when your component needs to raise an event that does not contain attributes. If the event data is an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Creates an event, sets an
Specifies the event type of the event to be added to the queue. The event type is returned by the event's
The extended type of the event. If the event does not have an extended type, use the value GUID_NULL. The extended type is returned by the event's
A success or failure code indicating the status of the event. This value is returned by the event's
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Call this method when your component needs to raise an event that contains an
Shuts down the event queue.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when your component shuts down. After this method is called, all
This method removes all of the events from the queue.
Represents media keys used for decrypting media data using a Digital Rights Management (DRM) key system.
-Gets the suspend notify interface of the Content Decryption Module (CDM).
-Creates a media key session object using the specified initialization data and custom data.
-The MIME type of the media container used for the content.
The initialization data for the key system.
The count in bytes of initData.
Custom data sent to the key system.
The count in bytes of cbCustomData.
notify
The media key session.
If this method succeeds, it returns
Gets the key system string the
If this method succeeds, it returns
If this method succeeds, it returns
Shutdown should be called by the application before final release. The Content Decryption Module (CDM) reference and any other resources are released at this point. However, related sessions are not freed or closed.
-Gets the suspend notify interface of the Content Decryption Module (CDM).
-The suspend notify interface of the Content Decryption Module (CDM).
If this method succeeds, it returns
Represents a session with the Digital Rights Management (DRM) key system.
-Gets the error state associated with the media key session.
-The error code.
Platform specific error information.
If this method succeeds, it returns
Gets the name of the key system the media keys object was created with.
-The name of the key system.
If this method succeeds, it returns
Gets a unique session id created for this session.
-The media key session id.
If this method succeeds, it returns
Passes in a key value with any associated data required by the Content Decryption Module for the given key system.
-The count in bytes of key.
If this method succeeds, it returns
Closes the media key session and must be called before the key session is released.
-If this method succeeds, it returns
Provides a mechanism for notifying the app about information regarding the media key session.
-Passes information to the application so it can initiate a key acquisition.
-The URL to send the message to.
The message to send to the application.
The length in bytes of message.
Notifies the application that the key has been added.
-KeyAdded can also be called if the keys requested for the session have already been acquired.
-Notifies the application that an error occurred while processing the key.
-Provides playback controls for protected and unprotected content. The Media Session and the protected media path (PMP) session objects expose this interface. This interface is the primary interface that applications use to control the Media Foundation pipeline.
To obtain a reference to this interface, call
Retrieves the Media Session's presentation clock.
-The application can query the returned
Retrieves the capabilities of the Media Session, based on the current presentation.
-Sets a topology on the Media Session.
- Bitwise OR of zero or more flags from the
Pointer to the topology object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
| The topology has invalid values for one or more of the following attributes: |
| Protected content cannot be played while debugging. |
If pTopology is a full topology, set the
If the Media Session is currently paused or stopped, the SetTopology method does not take effect until the next call to
If the Media Session is currently running, or on the next call to Start, the SetTopology method does the following:
This method is asynchronous. If the method returns
Clears all of the presentations that are queued for playback in the Media Session.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
This method is asynchronous. When the operation completes, the Media Session sends an
This method does not clear the current topology; it only removes topologies that are placed in the queue, waiting for playback. To remove the current topology, call
Starts the Media Session.
-Pointer to a
The following time format GUIDs are defined:
Value | Meaning |
---|---|
| Presentation time. The pvarStartPosition parameter must have one of the following
All media sources support this time format. |
| Segment offset. This time format is supported by the Sequencer Source. The starting time is an offset within a segment. Call the |
| Note: Requires Windows 7 or later. Skip to a playlist entry. The pvarStartPosition parameter specifies the index of the playlist entry, relative to the current entry. For example, the value 2 skips forward two entries. To skip backward, pass a negative value. If a media source supports this time format, the |
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
When this method is called, the Media Session starts the presentation clock and begins to process media samples.
This method is asynchronous. When the method completes, the Media Session sends an
Pauses the Media Session.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
| The Media Session cannot pause while stopped. |
This method pauses the presentation clock.
This method is asynchronous. When the operation completes, the Media Session sends an
This method fails if the Media Session is stopped.
Stops the Media Session.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
This method is asynchronous. When the operation completes, the Media Session sends an
Closes the Media Session and releases all of the resources it is using.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Media Session has been shut down. |
This method is asynchronous. When the operation completes, the Media Session sends an
After the Close method is called, the only valid methods on the Media Session are the following:
All other methods return
Shuts down the Media Session and releases all the resources used by the Media Session.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when you are done using the Media Session, before the final call to IUnknown::Release. Otherwise, your application will leak memory.
After this method is called, other
Retrieves the Media Session's presentation clock.
-Receives a reference to the presentation clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Media Session does not have a presentation clock. |
| The Media Session has been shut down. |
The application can query the returned
Retrieves the capabilities of the Media Session, based on the current presentation.
-Receives a bitwise OR of zero or more of the following flags.
Value | Meaning |
---|---|
| The Media Session can be paused. |
| The Media Session supports forward playback at rates faster than 1.0. |
| The Media Session supports reverse playback. |
| The Media Session can be seeked. |
| The Media Session can be started. |
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| The Media Session has been shut down. |
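Code that drives a transport UI typically inspects this capabilities mask to enable or disable individual controls. A minimal sketch, assuming the MFSESSIONCAP_* flag values from mfidl.h (restated here as local constants; verify them against your SDK headers before relying on them):

```cpp
#include <cstdint>

// Assumed restatement of the MFSESSIONCAP_* flags from mfidl.h.
constexpr uint32_t SESSIONCAP_START        = 0x00000001;
constexpr uint32_t SESSIONCAP_SEEK         = 0x00000002;
constexpr uint32_t SESSIONCAP_PAUSE        = 0x00000004;
constexpr uint32_t SESSIONCAP_RATE_FORWARD = 0x00000010;
constexpr uint32_t SESSIONCAP_RATE_REVERSE = 0x00000020;

// Which transport controls should be enabled for the current presentation.
struct TransportControls {
    bool canStart, canSeek, canPause, canFastForward, canRewind;
};

// Translate the bitwise-OR capabilities mask into individual UI decisions.
TransportControls FromCaps(uint32_t caps) {
    return {
        (caps & SESSIONCAP_START)        != 0,
        (caps & SESSIONCAP_SEEK)         != 0,
        (caps & SESSIONCAP_PAUSE)        != 0,
        (caps & SESSIONCAP_RATE_FORWARD) != 0,
        (caps & SESSIONCAP_RATE_REVERSE) != 0,
    };
}
```

Because the capabilities can change with each presentation, a player would normally re-evaluate this mask whenever the session raises a capabilities-changed event.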
Gets a topology from the Media Session.
This method can get the current topology or a queued topology.
- Bitwise OR of zero or more flags from the
The identifier of the topology. This parameter is ignored if the dwGetFullTopologyFlags parameter contains the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Media Session has been shut down. |
If the
This method can be used to retrieve the topology for the current presentation or any pending presentations. It cannot be used to retrieve a topology that has already ended.
The topology returned in ppFullTopo is a full topology, not a partial topology.
-Implemented by media sink objects. This interface is the base interface for all Media Foundation media sinks. Stream sinks handle the actual processing of data on each stream.
-
Gets the characteristics of the media sink.
-The characteristics of a media sink are fixed throughout the lifetime of the sink.
-
Gets the number of stream sinks on this media sink.
-
Gets the presentation clock that was set on the media sink.
-
Gets the characteristics of the media sink.
-Receives a bitwise OR of zero or more flags. The following flags are defined:
Value | Meaning |
---|---|
| The media sink has a fixed number of streams. It does not support the |
| The media sink cannot match rates with an external clock. For best results, this media sink should be used as the time source for the presentation clock. If any other time source is used, the media sink cannot match rates with the clock, with poor results (for example, glitching). This flag should be used sparingly, because it limits how the pipeline can be configured. For more information about the presentation clock, see Presentation Clock. |
| The media sink is rateless. It consumes samples as quickly as possible, and does not synchronize itself to a presentation clock. Most archiving sinks are rateless. |
| The media sink requires a presentation clock. The presentation clock is set by calling the media sink's This flag is obsolete, because all media sinks must support the SetPresentationClock method, even if the media sink ignores the clock (as in a rateless media sink). |
| The media sink can accept preroll samples before the presentation clock starts. The media sink exposes the |
| The first stream sink (index 0) is a reference stream. The reference stream must have a media type before the media types can be set on the other stream sinks. |
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
The characteristics of a media sink are fixed throughout the lifetime of the sink.
-
Adds a new stream sink to the media sink.
-Identifier for the new stream. The value is arbitrary but must be unique.
Pointer to the
Receives a reference to the new stream sink's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified stream identifier is not valid. |
| The media sink's Shutdown method has been called. |
| There is already a stream sink with the same stream identifier. |
| This media sink has a fixed set of stream sinks. New stream sinks cannot be added. |
Not all media sinks support this method. If the media sink does not support this method, the
If pMediaType is
Removes a stream sink from the media sink.
-Identifier of the stream to remove. The stream identifier is defined when you call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This particular stream sink cannot be removed. |
| The stream number is not valid. |
| The media sink has not been initialized. |
| The media sink's Shutdown method has been called. |
| This media sink has a fixed set of stream sinks. Stream sinks cannot be removed. |
After this method is called, the corresponding stream sink object is no longer valid. The
Not all media sinks support this method. If the media sink does not support this method, the
In some cases, the media sink supports this method but does not allow every stream sink to be removed. (For example, it might not allow stream 0 to be removed.)
-
Gets the number of stream sinks on this media sink.
-Receives the number of stream sinks.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
Gets a stream sink, specified by index.
-Zero-based index of the stream. To get the number of streams, call
Receives a reference to the stream's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid index. |
| The media sink's Shutdown method has been called. |
Enumerating stream sinks is not a thread-safe operation, because stream sinks can be added or removed between calls to this method.
-
Gets a stream sink, specified by stream identifier.
-Stream identifier of the stream sink.
Receives a reference to the stream's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The stream identifier is not valid. |
| The media sink's Shutdown method has been called. |
If you add a stream sink by calling the
To enumerate the streams by index number instead of stream identifier, call
Sets the presentation clock on the media sink.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The presentation clock does not have a time source. Call SetTimeSource on the presentation clock. |
| The media sink's Shutdown method has been called. |
During streaming, the media sink attempts to match rates with the presentation clock. Ideally, the media sink presents samples at the correct time according to the presentation clock and does not fall behind. Rateless media sinks are an exception to this rule, as they consume samples as quickly as possible and ignore the clock. If the sink is rateless, the
The presentation clock must have a time source. Before calling this method, call
If pPresentationClock is non-
All media sinks must support this method.
-
Gets the presentation clock that was set on the media sink.
-Receives a reference to the presentation clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No clock has been set. To set the presentation clock, call |
| The media sink's Shutdown method has been called. |
Shuts down the media sink and releases the resources it is using.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink was shut down. |
If the application creates the media sink, it is responsible for calling Shutdown to avoid memory or resource leaks. In most applications, however, the application creates an activation object for the media sink, and the Media Session uses that object to create the media sink. In that case, the Media Session, not the application, shuts down the media sink. (For more information, see Activation Objects.)
After this method returns, all methods on the media sink return
Enables a media sink to receive samples before the presentation clock is started.
To get a reference to this interface, call QueryInterface on the media sink.
-Media sinks can implement this interface to support seamless playback and transitions. If a media sink exposes this interface, it can receive samples before the presentation clock starts. It can then pre-process the samples, so that rendering can begin immediately when the clock starts. Prerolling helps to avoid glitches during playback.
If a media sink supports preroll, the media sink's
Notifies the media sink that the presentation clock is about to start.
- The upcoming start time for the presentation clock, in 100-nanosecond units. This time is the same value that will be given to the
If this method succeeds, it returns
After this method is called, the media sink sends any number of
During preroll, the media sink can prepare the samples that it receives, so that they are ready to be rendered. It does not actually render any samples until the clock starts.
-Implemented by media source objects.
Media sources are objects that generate media data. For example, the data might come from a video file, a network stream, or a hardware device, such as a camera. Each media source contains one or more streams, and each stream delivers data of one type, such as audio or video.
-In Windows 8, this interface is extended with
Retrieves the characteristics of the media source.
-The characteristics of a media source can change at any time. If this happens, the source sends an
Retrieves the characteristics of the media source.
-Receives a bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
The characteristics of a media source can change at any time. If this happens, the source sends an
Retrieves a copy of the media source's presentation descriptor. Applications use the presentation descriptor to select streams and to get information about the source content.
-Receives a reference to the presentation descriptor's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
The presentation descriptor contains the media source's default settings for the presentation. The application can change these settings by selecting or deselecting streams, or by changing the media type on a stream. Do not modify the presentation descriptor unless the source is stopped. The changes take effect when the source's
Starts, seeks, or restarts the media source by specifying where to start playback.
- Pointer to the
Pointer to a
Specifies where to start playback. The units of this parameter are indicated by the time format given in pguidTimeFormat. If the time format is GUID_NULL, the variant type must be VT_I8 or VT_EMPTY. Use VT_I8 to specify a new starting position, in 100-nanosecond units. Use VT_EMPTY to start from the current position. Other time formats might use other
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The start position is past the end of the presentation (ASF media source). |
| A hardware device was unable to start streaming. This error code can be returned by a media source that represents a hardware device, such as a camera. For example, if the camera is already being used by another application, the method might return this error code. |
| The start request is not valid. For example, the start position is past the end of the presentation. |
| The media source's Shutdown method has been called. |
| The media source does not support the time format specified in pguidTimeFormat. |
This method is asynchronous. If the operation succeeds, the media source sends the following events:
If the start operation fails asynchronously (after the method returns
A call to Start results in a seek if the previous state was started or paused, and the new starting position is not VT_EMPTY. Not every media source can seek. If a media source can seek, the
Events from the media source are not synchronized with events from the media streams. If you seek a media source, therefore, you can still receive samples from the earlier position after getting the
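When the time format is GUID_NULL, a VT_I8 start position is expressed in 100-nanosecond units, so callers usually convert from seconds before building the PROPVARIANT. A small sketch of that conversion; the helper names are our own, not part of the API:

```cpp
#include <cstdint>

// One second expressed in 100-nanosecond ("hns") units.
constexpr int64_t HNS_PER_SECOND = 10'000'000;

// Convert a position in seconds to 100-nanosecond units (truncating).
constexpr int64_t SecondsToHns(double seconds) {
    return static_cast<int64_t>(seconds * HNS_PER_SECOND);
}

// Convert back, e.g. for display.
constexpr double HnsToSeconds(int64_t hns) {
    return static_cast<double>(hns) / HNS_PER_SECOND;
}

// In a real (Windows-only) call, the converted value would be wrapped in a
// PROPVARIANT, for example with InitPropVariantFromInt64, and passed to the
// source's Start method along with the presentation descriptor:
//   PROPVARIANT var;
//   InitPropVariantFromInt64(SecondsToHns(5.0), &var);
//   hr = pSource->Start(pPresentationDescriptor, &GUID_NULL, &var);
```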
Stops all active streams in the media source.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
This method is asynchronous. When the operation completes, the media source sends an
When a media source is stopped, its current position reverts to zero. After that, if the Start method is called with VT_EMPTY for the starting position, playback starts from the beginning of the presentation.
While the source is stopped, no streams produce data.
-
Pauses all active streams in the media source.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid state transition. The media source must be in the started state. |
| The media source's Shutdown method has been called. |
This method is asynchronous. When the operation completes, the media source sends an
The media source must be in the started state. The method fails if the media source is paused or stopped.
While the source is paused, calls to
Not every media source can pause. If a media source can pause, the
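The state rules above (Pause is valid only from the started state; Stop resets the current position to zero) can be modeled as a toy state machine. This is a sketch of the documented behavior only, not any real source implementation:

```cpp
// States a media source moves through, per the documentation above.
enum class SourceState { Stopped, Started, Paused };

struct SourceModel {
    SourceState state = SourceState::Stopped;
    long long positionHns = 0;  // current position, 100-ns units

    bool Start() { state = SourceState::Started; return true; }

    // Pause fails unless the source is started (the docs describe this as
    // an invalid state transition when paused or stopped).
    bool Pause() {
        if (state != SourceState::Started) return false;
        state = SourceState::Paused;
        return true;
    }

    // Stop is valid from any state and reverts the position to zero.
    bool Stop() {
        state = SourceState::Stopped;
        positionHns = 0;
        return true;
    }
};
```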
Shuts down the media source and releases the resources it is using.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the application creates the media source, either directly or through the source resolver, the application is responsible for calling Shutdown to avoid memory or resource leaks.
After this method is called, methods on the media source and all of its media streams return
Extends the
To get a reference to this interface, call QueryInterface on the media source.
-Implementations of this interface can return E_NOTIMPL for any methods that are not required by the media source.
-Gets an attribute store for the media source.
-Use the
Sets a reference to the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager on the media source.
-Gets an attribute store for the media source.
-Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The media source does not support source-level attributes. |
Use the
Gets an attribute store for a stream on the media source.
-The identifier of the stream. To get the identifier, call
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The media source does not support stream-level attributes. |
| Invalid stream identifier. |
Use the
Sets a reference to the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager on the media source.
-A reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The media source does not support source-level attributes. |
Provides functionality for the Media Source Extension (MSE).
- Media Source Extensions (MSE) is a World Wide Web Consortium (W3C) standard that extends the HTML5 media elements to enable dynamically changing the media stream without the use of plug-ins. The
The MSE media source keeps track of the ready state of the source as well as a list of
Gets the collection of source buffers associated with this media source.
-Gets the source buffers that are actively supplying media data to the media source.
-Gets the ready state of the media source.
-Gets or sets the duration of the media source in 100-nanosecond units.
-Indicate that the end of the media stream has been reached.
-Gets the collection of source buffers associated with this media source.
-The collection of source buffers.
Gets the source buffers that are actively supplying media data to the media source.
-The list of active source buffers.
Gets the ready state of the media source.
-The ready state of the media source.
Gets the duration of the media source in 100-nanosecond units.
-The duration of the media source in 100-nanosecond units.
Sets the duration of the media source in 100-nanosecond units.
-The duration of the media source in 100-nanosecond units.
If this method succeeds, it returns
Adds a
If this method succeeds, it returns
Removes the specified source buffer from the collection of source buffers managed by the
If this method succeeds, it returns
Indicate that the end of the media stream has been reached.
-Used to pass error information.
If this method succeeds, it returns
Gets a value that indicates if the specified MIME type is supported by the media source.
-The media type to check support for.
true if the media type is supported; otherwise, false.
Gets the
The source buffer.
Provides functionality for raising events associated with
Used to indicate that the media source has opened.
-Used to indicate that the media source has ended.
-Used to indicate that the media source has closed.
-
Notifies the source when playback has reached the end of a segment. For timelines, this corresponds to reaching a mark-out point.
-
Notifies the source when playback has reached the end of a segment. For timelines, this corresponds to reaching a mark-out point.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Enables an application to get a topology from the sequencer source. This interface is exposed by the sequencer source object.
-
Returns a topology for a media source that builds an internal topology.
-A reference to the
Receives a reference to the topology's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. For example, a |
Represents one stream in a media source.
-Streams are created when a media source is started. For each stream, the media source sends an
Retrieves a reference to the media source that created this media stream.
-
Retrieves a stream descriptor for this media stream.
-Do not modify the stream descriptor. To change the presentation, call
Retrieves a reference to the media source that created this media stream.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
Retrieves a stream descriptor for this media stream.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
Do not modify the stream descriptor. To change the presentation, call
Requests a sample from the media source.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The end of the stream was reached. |
| The media source is stopped. |
| The source's Shutdown method has been called. |
If pToken is not
When the next sample is available, the media stream does the following:
If the media stream cannot fulfill the caller's request for a sample, it simply releases the token object and skips steps 2 and 3.
The caller should monitor the reference count on the request token. If the media stream sends an
Because the Media Foundation pipeline is multithreaded, the source's RequestSample method might get called after the source has stopped. If the media source is stopped, the method should return
If the media source is paused, the method succeeds, but the stream does not deliver the sample until the source is started again.
If a media source encounters an error asynchronously while processing data, it should signal the error in one of the following ways (but not both):
Represents a request for a sample from a MediaStreamSource.
-MFMediaStreamSourceSampleRequest is implemented by the Windows.Media.Core.MediaStreamSourceSampleRequest runtime class.
-Sets the sample for the media stream source.
-Sets the sample for the media stream source.
-The sample for the media stream source.
If this method succeeds, it returns
Represents a list of time ranges, where each range is defined by a start and end time.
-The
Several
Gets the number of time ranges contained in the object.
-This method corresponds to the TimeRanges.length attribute in HTML5.
-Gets the number of time ranges contained in the object.
-Returns the number of time ranges.
This method corresponds to the TimeRanges.length attribute in HTML5.
-Gets the start time for a specified time range.
-The zero-based index of the time range to query. To get the number of time ranges, call
Receives the start time, in seconds.
If this method succeeds, it returns
This method corresponds to the TimeRanges.start method in HTML5.
-Gets the end time for a specified time range.
-The zero-based index of the time range to query. To get the number of time ranges, call
Receives the end time, in seconds.
If this method succeeds, it returns
This method corresponds to the TimeRanges.end method in HTML5.
-Queries whether a specified time falls within any of the time ranges.
-The time, in seconds.
Returns TRUE if any time range contained in this object spans the value of the time parameter. Otherwise, returns
This method returns TRUE if the following condition holds for any time range in the list:
Adds a new range to the list of time ranges.
-The start time, in seconds.
The end time, in seconds.
If this method succeeds, it returns
If the new range intersects a range already in the list, the two ranges are combined. Otherwise, the new range is added to the list.
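The merge rule can be sketched in portable code. This is our own illustration of the documented behavior (an added range that intersects an existing range combines with it, and ContainsTime tests start <= t <= end), not the actual implementation:

```cpp
#include <vector>
#include <algorithm>

struct TimeRange { double start, end; };  // times in seconds

// Add [start, end] to the list, combining it with any intersecting ranges.
void AddRange(std::vector<TimeRange>& ranges, double start, double end) {
    TimeRange merged{start, end};
    std::vector<TimeRange> out;
    for (const TimeRange& r : ranges) {
        if (r.end < merged.start || r.start > merged.end) {
            out.push_back(r);  // disjoint: keep as-is
        } else {               // intersecting: absorb into the new range
            merged.start = std::min(merged.start, r.start);
            merged.end   = std::max(merged.end, r.end);
        }
    }
    out.push_back(merged);
    std::sort(out.begin(), out.end(),
              [](const TimeRange& a, const TimeRange& b) { return a.start < b.start; });
    ranges = out;
}

// True if any range in the list spans time t (start <= t <= end).
bool ContainsTime(const std::vector<TimeRange>& ranges, double t) {
    for (const TimeRange& r : ranges)
        if (r.start <= t && t <= r.end) return true;
    return false;
}
```

Note that a single added range can bridge several existing ranges at once, collapsing them all into one.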
-Clears the list of time ranges.
-If this method succeeds, it returns
Represents a description of a media format.
- To create a new media type, call
All of the information in a media type is stored as attributes. To clone a media type, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major type of the format.
- This method is equivalent to getting the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the media type is a temporally compressed format. Temporal compression uses information from previously decoded samples when decompressing the current sample.
- This method returns
If the method returns TRUE in pfCompressed, it is a hint that the format has temporal compression applied to it. If the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major type of the format.
-Receives the major type
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The major type is not set. |
This method is equivalent to getting the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the media type is a temporally compressed format. Temporal compression uses information from previously decoded samples when decompressing the current sample.
-Receives a Boolean value. The value is TRUE if the format uses temporal compression, or
If this method succeeds, it returns
This method returns
If the method returns TRUE in pfCompressed, it is a hint that the format has temporal compression applied to it. If the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Compares two media types and determines whether they are identical. If they are not identical, the method indicates how the two formats differ.
-Pointer to the
Receives a bitwise OR of zero or more flags, indicating the degree of similarity between the two media types. The following flags are defined.
Value | Meaning |
---|---|
| The major types are the same. The major type is specified by the |
| The subtypes are the same, or neither media type has a subtype. The subtype is specified by the |
| The attributes in one of the media types are a subset of the attributes in the other, and the values of these attributes match, excluding the value of the Specifically, the method takes the media type with the smaller number of attributes and checks whether each attribute from that type is present in the other media type and has the same value (not including To perform other comparisons, use the |
| The user data is identical, or neither media type contains user data. User data is specified by the |
The method returns an
Return code | Description |
---|---|
| The types are not equal. Examine the pdwFlags parameter to determine how the types differ. |
| The types are equal. |
| One or both media types are invalid. |
Both of the media types must have a major type, or the method returns E_INVALIDARG.
If the method succeeds and all of the comparison flags are set in pdwFlags, the return value is
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
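When the comparison reports partial equality, callers usually test individual bits of the flags mask rather than requiring full equality. A hedged sketch, assuming the MF_MEDIATYPE_EQUAL_* values from mfobjects.h (restated locally; verify them against your SDK headers):

```cpp
#include <cstdint>

// Assumed restatement of the MF_MEDIATYPE_EQUAL_* comparison flags.
constexpr uint32_t EQUAL_MAJOR_TYPES      = 0x00000001;
constexpr uint32_t EQUAL_FORMAT_TYPES     = 0x00000002;
constexpr uint32_t EQUAL_FORMAT_DATA      = 0x00000004;
constexpr uint32_t EQUAL_FORMAT_USER_DATA = 0x00000008;

// Many components only need the major type and subtype to match; format
// details and user data may legitimately differ between two usable types.
bool SameFormatFamily(uint32_t flags) {
    constexpr uint32_t kRequired = EQUAL_MAJOR_TYPES | EQUAL_FORMAT_TYPES;
    return (flags & kRequired) == kRequired;
}
```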
Retrieves an alternative representation of the media type. Currently only the DirectShow
Value | Meaning |
---|---|
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
Receives a reference to a structure that contains the representation. The method allocates the memory for the structure. The caller must release the memory by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The details of the media type do not match the requested representation. |
| The media type is not valid. |
| The media type does not support the requested representation. |
If you request a specific format structure in the guidRepresentation parameter, such as
You can also use the MFInitAMMediaTypeFromMFMediaType function to convert a Media Foundation media type into a DirectShow media type.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an alternative representation of the media type. Currently only the DirectShow
Value | Meaning |
---|---|
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
Receives a reference to a structure that contains the representation. The method allocates the memory for the structure. The caller must release the memory by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The details of the media type do not match the requested representation. |
| The media type is not valid. |
| The media type does not support the requested representation. |
If you request a specific format structure in the guidRepresentation parameter, such as
You can also use the MFInitAMMediaTypeFromMFMediaType function to convert a Media Foundation media type into a DirectShow media type.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
The media type is created without any attributes.
-Applies to: desktop apps | Metro style apps
Converts a Media Foundation audio media type to a
Receives the size of the
Contains a flag from the
If the wFormatTag member of the returned structure is
Gets and sets media types on an object, such as a media source or media sink.
-This interface is exposed by media-type handlers.
If you are implementing a custom media source or media sink, you can create a simple media-type handler by calling
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of media types in the object's list of supported media types.
- To get the supported media types, call
For a media source, the media type handler for each stream must contain at least one supported media type. For media sinks, the media type handler for each stream might contain zero media types. In that case, the application must provide the media type. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the current media type of the object.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major media type of the object.
-The major type identifies what kind of data is in the stream, such as audio or video. To get the specific details of the format, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the object supports a specified media type.
- Pointer to the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support this media type. |
If the object supports the media type given in pMediaType, the method returns
The ppMediaType parameter is optional. If the method fails, the object might use ppMediaType to return a media type that the object does support, and which closely matches the one given in pMediaType. The method is not guaranteed to return a media type in ppMediaType. If no type is returned, this parameter receives a
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of media types in the object's list of supported media types.
-Receives the number of media types in the list.
If this method succeeds, it returns
To get the supported media types, call
For a media source, the media type handler for each stream must contain at least one supported media type. For media sinks, the media type handler for each stream might contain zero media types. In that case, the application must provide the media type. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type from the object's list of supported media types.
- Zero-based index of the media type to retrieve. To get the number of media types in the list, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwIndex parameter is out of range. |
Media types are returned in the approximate order of preference. The list of supported types is not guaranteed to be complete. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the object's media type.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid request. |
For media sources, setting the media type means the source will generate data that conforms to that media type. For media sinks, setting the media type means the sink can receive data that conforms to that media type.
Any implementation of this method should check whether pMediaType differs from the object's current media type. If the types are identical, the method should return
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the current media type of the object.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No media type is set. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major media type of the object.
-Receives a
If this method succeeds, it returns
The major type identifies what kind of data is in the stream, such as audio or video. To get the specific details of the format, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type from the object's list of supported media types.
- Zero-based index of the media type to retrieve. To get the number of media types in the list, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwIndex parameter is out of range. |
?
Media types are returned in the approximate order of preference. The list of supported types is not guaranteed to be complete. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Manages metadata for an object. Metadata is information that describes a media file, stream, or other content. Metadata consists of individual properties, where each property contains a descriptive name and a value. A property may be associated with a particular language.
To get this interface from a media source, use the
Gets a list of the languages in which metadata is available.
-For more information about language tags, see RFC 1766, "Tags for the Identification of Languages".
To set the current language, call
Gets a list of all the metadata property names on this object.
-Sets the language for setting and retrieving metadata.
-Pointer to a null-terminated string containing an RFC 1766-compliant language tag.
If this method succeeds, it returns
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages".
-Gets the current language setting.
-Receives a reference to a null-terminated string containing an RFC 1766-compliant language tag. The caller must release the string by calling CoTaskMemFree.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The metadata provider does not support multiple languages. |
| No language was set. |
?
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages."
The
Gets a list of the languages in which metadata is available.
- A reference to a
The returned
If this method succeeds, it returns
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages".
To set the current language, call
Sets the value of a metadata property.
-Pointer to a null-terminated string containing the name of the property.
Pointer to a
If this method succeeds, it returns
Gets the value of a metadata property.
- A reference to a null-terminated string that contains the name of the property. To get the list of property names, call
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The requested property was not found. |
?
Deletes a metadata property.
-Pointer to a null-terminated string containing the name of the property.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The property was not found. |
?
For a media source, deleting a property from the metadata collection does not change the original content.
-Gets a list of all the metadata property names on this object.
-Pointer to a
If this method succeeds, it returns
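Taken together, the property and language methods above behave roughly like the following Python mock (class, method, and error names are illustrative; the real interface exchanges PROPVARIANT values over COM):

```python
class Metadata:
    """Mock metadata store: named properties, optionally per language."""
    def __init__(self):
        self._props = {}
        self._language = None  # RFC 1766 tag, e.g. "en-us"; None = default

    def set_language(self, tag):
        self._language = tag

    def set_property(self, name, value):
        self._props[(name, self._language)] = value

    def get_property(self, name):
        try:
            return self._props[(name, self._language)]
        except KeyError:
            # Stands in for the "requested property was not found" error.
            raise KeyError("property not found") from None

    def delete_property(self, name):
        # Deleting a property only affects this store, never the content.
        self._props.pop((name, self._language), None)

    def get_all_property_names(self):
        return sorted({name for (name, _lang) in self._props})
```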
Gets metadata from a media source or other object.
If a media source supports this interface, it must expose the interface as a service. To get a reference to this interface from a media source, call
Use this interface to get a reference to the
Gets a collection of metadata, either for an entire presentation, or for one stream in the presentation.
- Pointer to the
If this parameter is zero, the method retrieves metadata that applies to the entire presentation. Otherwise, this parameter specifies a stream identifier, and the method retrieves metadata for that stream. To get the stream identifier for a stream, call
Reserved. Must be zero.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No metadata is available for the requested stream or presentation. |
?
Contains data that is needed to implement the
Any custom implementation of the
Receives state-change notifications from the presentation clock.
-To receive state-change notifications from the presentation clock, implement this interface and call
This interface must be implemented by:
Presentation time sources. The presentation clock uses this interface to request state changes from the time source.
Media sinks. Media sinks use this interface to get notifications when the presentation clock changes.
Other objects that need to be notified can implement this interface.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The specified substream index is invalid. Call GetStreamCount to get the number of substreams managed by the multiplexed media source. |
?
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The
The following functions return
A byte stream for a media source can be opened with read access. A byte stream for an archive media sink should be opened with both read and write access. (Read access may be required, because the archive sink might need to read portions of the file as it writes.)
Some implementations of this interface also expose one or more of the following interfaces:
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the ability to retrieve
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the ability to retrieve
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the ability to retrieve
Retrieves the user name.
-If the user name is not available, the method might succeed and set *pcbData to zero.
-
Sets the user name.
-Pointer to a buffer that contains the user name. If fDataIsEncrypted is
Size of pbData, in bytes. If fDataIsEncrypted is
If TRUE, the user name is encrypted. Otherwise, the user name is not encrypted.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Sets the password.
-Pointer to a buffer that contains the password. If fDataIsEncrypted is
Size of pbData, in bytes. If fDataIsEncrypted is
If TRUE, the password is encrypted. Otherwise, the password is not encrypted.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves the user name.
-Pointer to a buffer that receives the user name. To find the required buffer size, set this parameter to
On input, specifies the size of the pbData buffer, in bytes. On output, receives the required buffer size. If fEncryptData is
If TRUE, the method returns an encrypted string. Otherwise, the method returns an unencrypted string.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If the user name is not available, the method might succeed and set *pcbData to zero.
-
Retrieves the password.
-Pointer to a buffer that receives the password. To find the required buffer size, set this parameter to
On input, specifies the size of the pbData buffer, in bytes. On output, receives the required buffer size. If fEncryptData is
If TRUE, the method returns an encrypted string. Otherwise, the method returns an unencrypted string.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If the password is not available, the method might succeed and set *pcbData to zero.
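Both retrieval methods use the common two-call buffer protocol: call once with a too-small (or zero-size) buffer to learn the required size, then allocate and call again. A Python sketch of that protocol, with an illustrative hard-coded credential (the status strings stand in for HRESULT codes):

```python
def get_user_name(buffer_size, credential=b"alice"):
    """Mock two-call protocol: returns (status, required_size, data)."""
    required = len(credential)
    if buffer_size < required:
        # Too small: report how many bytes the caller must allocate.
        return ("E_NOT_SUFFICIENT_BUFFER", required, None)
    return ("S_OK", required, credential)


# First call with a zero-size buffer to learn the required size...
status, needed, _ = get_user_name(0)
# ...then allocate a buffer of that size and call again.
status, needed, name = get_user_name(needed)
```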
-
Queries whether logged-on credentials should be used.
-Receives a Boolean value. If logged-on credentials should be used, the value is TRUE. Otherwise, the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Gets credentials from the credential cache.
This interface is implemented by the credential cache object. Applications that implement the
Retrieves the credential object for the specified URL.
-A null-terminated wide-character string containing the URL for which the credential is needed.
A null-terminated wide-character string containing the realm for the authentication.
Bitwise OR of zero or more flags from the
Receives a reference to the
Receives a bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Reports whether the credential object provided successfully passed the authentication challenge.
-Pointer to the
TRUE if the credential object succeeded in the authentication challenge; otherwise,
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method is called by the network source into the credential manager.
-
Specifies how user credentials are stored.
-Pointer to the
Bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If no flags are specified, the credentials are cached in memory. This method can be implemented by the credential manager and called by the network source.
-Implemented by applications to provide user credentials for a network source.
To use this interface, implement it in your application. Then create a property store object and set the MFNETSOURCE_CREDENTIAL_MANAGER property. The value of the property is a reference to your application's
Media Foundation does not provide a default implementation of this interface. Applications that support authentication must implement this interface.
-
Begins an asynchronous request to retrieve the user's credentials.
-Pointer to an
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Completes an asynchronous request to retrieve the user's credentials.
-Pointer to an
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Specifies whether the user's credentials succeeded in the authentication challenge. The network source calls this method to inform the application whether the user's credentials were authenticated.
-Pointer to the
Boolean value. The value is TRUE if the credentials succeeded in the authentication challenge. Otherwise, the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Determines the proxy to use when connecting to a server. The network source uses this interface.
Applications can create the proxy locator configured by the application by implementing the
To create the default proxy locator, call
Initializes the proxy locator object.
-Null-terminated wide-character string containing the hostname of the destination server.
Null-terminated wide-character string containing the destination URL.
Reserved. Set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Determines the next proxy to use.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There are no more proxy objects. |
?
Keeps a record of the success or failure of using the current proxy.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves the current proxy information including hostname and port.
-Pointer to a buffer that receives a null-terminated string containing the proxy hostname and port. This parameter can be
On input, specifies the number of elements in the pszStr array. On output, receives the required size of the buffer.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer specified in pszStr is too small. |
?
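The find-first/find-next iteration a network source performs over these methods can be sketched with hypothetical Python mocks (the real interface is COM/C++; class and method names here are illustrative):

```python
class ProxyLocator:
    """Mock proxy locator: yields candidate proxies one at a time."""
    def __init__(self, proxies):
        self._proxies = list(proxies)
        self._index = 0

    def find_first_proxy(self, host, url):
        self._index = 0
        return self._index < len(self._proxies)

    def find_next_proxy(self):
        self._index += 1
        # False stands in for the "no more proxy objects" error.
        return self._index < len(self._proxies)

    def get_current_proxy(self):
        return self._proxies[self._index]


def connect_via_proxies(locator, try_connect):
    """Try each proxy in turn until one connects, as a source would."""
    more = locator.find_first_proxy("example.com", "http://example.com/")
    while more:
        proxy = locator.get_current_proxy()
        if try_connect(proxy):
            return proxy
        more = locator.find_next_proxy()
    return None
```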
Creates a new instance of the default proxy locator.
-Receives a reference to the new proxy locator object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Creates an
Creates an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Notifies the application when a byte stream requests a URL, and enables the application to block URL redirection.
-To set the callback interface:
Called when the byte stream redirects to a URL.
-The URL to which the connection has been redirected.
To cancel the redirection, set this parameter to VARIANT_TRUE. To allow the redirection, set this parameter to VARIANT_FALSE.
If this method succeeds, it returns
Called when the byte stream requests a URL.
-The URL that the byte stream is requesting.
If this method succeeds, it returns
Retrieves the number of protocols supported by the network scheme plug-in.
-
Retrieves the number of protocols supported by the network scheme plug-in.
-
Retrieves the number of protocols supported by the network scheme plug-in.
-Receives the number of protocols.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves a supported protocol by index.
-Zero-based index of the protocol to retrieve. To get the number of supported protocols, call
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The value passed in the nProtocolIndex parameter was greater than the total number of supported protocols, returned by GetNumberOfSupportedProtocols. |
?
Not implemented in this release.
-This method returns
Marshals an interface reference to and from a stream.
Stream objects that support
Stores the data needed to marshal an interface across a process boundary.
-Interface identifier of the interface to marshal.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Marshals an interface from data stored in the stream.
-Interface identifier of the interface to marshal.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Encapsulates a usage policy from an input trust authority (ITA). Output trust authorities (OTAs) use this interface to query which protection systems they are required to enforce by the ITA.
-
Retrieves a
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
-
Retrieves the minimum version of the global revocation list (GRL) that must be enforced by the protected environment for this policy.
-Retrieves a list of the output protection systems that the output trust authority (OTA) must enforce, along with configuration data for each protection system.
-Describes the output that is represented by the OTA calling this method. This value is a bitwise OR of zero or more of the following flags.
Value | Meaning |
---|---|
| Hardware bus. |
| The output sends compressed data. If this flag is absent, the output sends uncompressed data. |
| Reserved. Do not use. |
| The output sends a digital signal. If this flag is absent, the output sends an analog signal. |
| Reserved. Do not use. |
| Reserved. Do not use. |
| The output sends video data. If this flag is absent, the output sends audio data. |
?
Indicates a specific family of output connectors that is represented by the OTA calling this method. Possible values include the following.
Value | Meaning |
---|---|
| AGP bus. |
| Component video. |
| Composite video. |
| Japanese D connector. (Connector conforming to the EIAJ RC-5237 standard.) |
| Embedded DisplayPort connector. |
| External DisplayPort connector. |
| Digital video interface (DVI) connector. |
| High-definition multimedia interface (HDMI) connector. |
| Low voltage differential signaling (LVDS) connector. A connector using the LVDS interface to connect internally to a display device. The connection between the graphics adapter and the display device is permanent and not accessible to the user. Applications should not enable High-Bandwidth Digital Content Protection (HDCP) for this connector. |
| PCI bus. |
| PCI Express bus. |
| PCI-X bus. |
| Audio data sent over a connector via S/PDIF. |
| Serial digital interface connector. |
| S-Video connector. |
| Embedded Unified Display Interface (UDI). |
| External UDI. |
| Unknown connector type. See Remarks. |
| VGA connector. |
| Miracast wireless connector. Supported in Windows 8.1 and later. |
?
Pointer to an array of
Number of elements in the rgGuidProtectionSchemasSupported array.
Receives a reference to the
If this method succeeds, it returns
The video OTA returns the MFCONNECTOR_UNKNOWN connector type unless the Direct3D device is in full-screen mode. (Direct3D windowed mode is not generally a secure video mode.) You can override this behavior by implementing a custom EVR presenter that implements the
Retrieves a
Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
-
Retrieves the minimum version of the global revocation list (GRL) that must be enforced by the protected environment for this policy.
-Receives the minimum GRL version.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Encapsulates information about an output protection system and its corresponding configuration data.
-If the configuration information for the output protection system does not require more than a DWORD of space, the configuration information is retrieved in the GetConfigurationData method. If more than a DWORD of configuration information is needed, it is stored using the
Retrieves the output protection system that is represented by this object. Output protection systems are identified by
Returns configuration data for the output protection system. The configuration data is used to enable or disable the protection system, and to set the protection levels.
-
Retrieves a
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
-
Retrieves the output protection system that is represented by this object. Output protection systems are identified by
Receives the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Returns configuration data for the output protection system. The configuration data is used to enable or disable the protection system, and to set the protection levels.
-Receives the configuration data. The meaning of this data depends on the output protection system.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves a
Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
-Encapsulates the functionality of one or more output protection systems that a trusted output supports. This interface is exposed by output trust authority (OTA) objects. Each OTA represents a single action that the trusted output can perform, such as play, copy, or transcode. An OTA can represent more than one physical output if each output performs the same action.
-
Retrieves the action that is performed by this output trust authority (OTA).
-
Retrieves the action that is performed by this output trust authority (OTA).
-Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Sets one or more policy objects on the output trust authority (OTA).
-The address of an array of
The number of elements in the ppPolicy array.
Receives either a reference to a buffer allocated by the OTA, or the value
Receives the size of the ppbTicket buffer, in bytes. If ppbTicket receives the value
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The policy was negotiated successfully, but the OTA will enforce it asynchronously. |
| The OTA does not support the requirements of this policy. |
?
If the method returns MF_S_WAIT_FOR_POLICY_SET, the OTA sends an
Sets one or more policy objects on the output trust authority (OTA).
-The address of an array of
The number of elements in the ppPolicy array.
Receives either a reference to a buffer allocated by the OTA, or the value
Receives the size of the ppbTicket buffer, in bytes. If ppbTicket receives the value
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The policy was negotiated successfully, but the OTA will enforce it asynchronously. |
| The OTA does not support the requirements of this policy. |
?
If the method returns MF_S_WAIT_FOR_POLICY_SET, the OTA sends an
Sets one or more policy objects on the output trust authority (OTA).
-The address of an array of
The number of elements in the ppPolicy array.
Receives either a reference to a buffer allocated by the OTA, or the value
Receives the size of the ppbTicket buffer, in bytes. If ppbTicket receives the value
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The policy was negotiated successfully, but the OTA will enforce it asynchronously. |
| The OTA does not support the requirements of this policy. |
?
If the method returns MF_S_WAIT_FOR_POLICY_SET, the OTA sends an
Controls how media sources and transforms are enumerated in Microsoft Media Foundation.
To get a reference to this interface, call
Media Foundation provides a set of built-in media sources and decoders. Applications can enumerate them as follows:
Applications might also enumerate these objects indirectly. For example, if an application uses the topology loader to resolve a partial topology, the topology loader calls
Third parties can implement their own custom media sources and decoders, and register them for enumeration so that other applications can use them.
To control the enumeration order, Media Foundation maintains two process-wide lists of CLSIDs: a preferred list and a blocked list. An object whose CLSID appears in the preferred list appears first in the enumeration order. An object whose CLSID appears on the blocked list is not enumerated.
The lists are initially populated from the registry. Applications can use the
The preferred list contains a set of key/value pairs, where the keys are strings and the values are CLSIDs. These key/value pairs are defined as follows:
The following examples show the various types of key:
To search the preferred list by key name, call the
The blocked list contains a list of CLSIDs. To enumerate the entire list, call the
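The effect of the two lists on enumeration order can be sketched in Python (the function and variable names are illustrative, not part of the API):

```python
def order_for_enumeration(candidates, preferred, blocked):
    """Apply the plug-in control policy: preferred CLSIDs come first,
    blocked CLSIDs are dropped, and everything else keeps its order."""
    blocked = set(blocked)
    first = [c for c in preferred if c in candidates and c not in blocked]
    rest = [c for c in candidates if c not in first and c not in blocked]
    return first + rest
```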
Searches the preferred list for a class identifier (CLSID) that matches a specified key name.
-Member of the
The key name to match. For more information about the format of key names, see the Remarks section of
Receives a CLSID from the preferred list.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| No CLSID matching this key was found. |
?
Gets a class identifier (CLSID) from the preferred list, specified by index value.
-Member of the
The zero-based index of the CLSID to retrieve.
Receives the key name associated with the CLSID. The caller must free the memory for the returned string by calling the CoTaskMemFree function. For more information about the format of key names, see the Remarks section of
Receives the CLSID at the specified index.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The index parameter is out of range. |
?
Adds a class identifier (CLSID) to the preferred list or removes a CLSID from the list.
-Member of the
The key name for the CLSID. For more information about the format of key names, see the Remarks section of
The CLSID to add to the list. If this parameter is
If this method succeeds, it returns
The preferred list is global to the caller's process. Calling this method does not affect the list in other processes.
-Queries whether a class identifier (CLSID) appears in the blocked list.
-Member of the
The CLSID to search for.
The method returns an
Return code | Description |
---|---|
| The specified CLSID appears in the blocked list. |
| Invalid argument. |
| The specified CLSID is not in the blocked list. |
?
Gets a class identifier (CLSID) from the blocked list.
-Member of the
The zero-based index of the CLSID to retrieve.
Receives the CLSID at the specified index.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The index parameter is out of range. |
?
Adds a class identifier (CLSID) to the blocked list, or removes a CLSID from the list.
-Member of the
The CLSID to add or remove.
Specifies whether to add or remove the CLSID. If the value is TRUE, the method adds the CLSID to the blocked list. Otherwise, the method removes it from the list.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
?
The blocked list is global to the caller's process. Calling this method does not affect the list in other processes.
-Controls how media sources and transforms are enumerated in Microsoft Media Foundation.
This interface extends the
To get a reference to this interface, call
Sets the policy for which media sources and transforms are enumerated.
-Sets the policy for which media sources and transforms are enumerated.
-A value from the
If this method succeeds, it returns
Note: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Represents a media item. A media item is an abstraction for a source of media data, such as a video file. Use this interface to get information about the source, or to change certain playback settings, such as the start and stop times. To get a reference to this interface, call one of the following methods:
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the MFPlay player object that created the media item.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the object that was used to create the media item.
-The object reference is set if the application uses
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the application-defined value stored in the media item.
-You can assign this value when you first create the media item, by specifying it in the dwUserData parameter of the
This method can be called after the player object is shut down.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the media item contains protected content.
Note: Currently
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the number of streams (audio, video, and other) in the media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets various flags that describe the media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a property store that contains metadata for the source, such as author or title.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the MFPlay player object that created the media item.
-If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the URL that was used to create the media item.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No URL is associated with this media item. |
| The |
?
This method applies when the application calls
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the object that was used to create the media item.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media item was created from a URL, not from an object. |
| The |
?
The object reference is set if the application uses
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the application-defined value stored in the media item.
-If this method succeeds, it returns
You can assign this value when you first create the media item, by specifying it in the dwUserData parameter of the
This method can be called after the player object is shut down.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Stores an application-defined value in the media item.
-This method can return one of these values.
This method can be called after the player object is shut down.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the start and stop times for the media item.
-If this method succeeds, it returns
The pguidStartPositionType and pguidStopPositionType parameters receive the units of time that are used. Currently, the only supported value is MFP_POSITIONTYPE_100NS.
Value | Description |
---|---|
MFP_POSITIONTYPE_100NS | 100-nanosecond units. The time parameter (pvStartValue or pvStopValue) uses the following data type:
|
?
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the start and stop time for the media item.
-The method returns an
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| Invalid start or stop time. Any of the following can cause this error:
|
?
By default, a media item plays from the beginning to the end of the file. This method adjusts the start time and/or the stop time:
The pguidStartPositionType and pguidStopPositionType parameters give the units of time that are used. Currently, the only supported value is MFP_POSITIONTYPE_100NS.
Value | Description |
---|---|
MFP_POSITIONTYPE_100NS | 100-nanosecond units. The time parameter (pvStartValue or pvStopValue) is a PROPVARIANT of type VT_I8.
To clear a previously set time, use an empty PROPVARIANT (VT_EMPTY). |
?
The adjusted start and stop times are used the next time that IMFPMediaPlayer::SetMediaItem is called with this media item.
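As a sketch of how these parameters fit together (assuming the standard mfplay.h and propvarutil.h declarations; the helper name is illustrative and error handling is abbreviated), the start and stop times are passed as VT_I8 PROPVARIANTs in 100-nanosecond units:

```cpp
// Sketch: restrict playback of a media item to the 2 s - 10 s range.
#include <mfplay.h>
#include <propvarutil.h>

HRESULT SetClipRange(IMFPMediaItem *pItem)
{
    PROPVARIANT varStart, varStop;
    // Times are in 100-nanosecond units (MFP_POSITIONTYPE_100NS).
    InitPropVariantFromInt64(2 * 10000000LL, &varStart);   // 2 seconds
    InitPropVariantFromInt64(10 * 10000000LL, &varStop);   // 10 seconds

    HRESULT hr = pItem->SetStartStopPosition(
        &MFP_POSITIONTYPE_100NS, &varStart,
        &MFP_POSITIONTYPE_100NS, &varStop);

    PropVariantClear(&varStart);
    PropVariantClear(&varStop);
    return hr;
}
```

The clipped range is applied the next time the item is passed to IMFPMediaPlayer::SetMediaItem.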
Queries whether the media item contains a video stream.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
To select or deselect streams before playback starts, call IMFPMediaItem::SetStreamSelection.
Queries whether the media item contains an audio stream.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
To select or deselect streams before playback starts, call IMFPMediaItem::SetStreamSelection.
Queries whether the media item contains protected content.
Note: Currently, MFPlay does not support protected content. If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the duration of the media item.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The method returns the total duration of the content, regardless of any start or stop times set through IMFPMediaItem::SetStartStopPosition.
Gets the number of streams (audio, video, and other) in the media item.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Queries whether a stream is selected to play.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
To select or deselect a stream, call IMFPMediaItem::SetStreamSelection.
Selects or deselects a stream.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
You can use this method to change which streams are selected. The change goes into effect the next time that IMFPMediaPlayer::SetMediaItem is called with this media item.
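To illustrate the selection workflow described above, here is a minimal sketch (assuming mfplay.h and mfapi.h; the function name is illustrative) that deselects every audio stream on a media item before it is queued:

```cpp
// Sketch: disable all audio streams on a media item.
#include <mfplay.h>
#include <mfapi.h>

HRESULT DisableAudioStreams(IMFPMediaItem *pItem)
{
    DWORD cStreams = 0;
    HRESULT hr = pItem->GetNumberOfStreams(&cStreams);
    if (FAILED(hr)) return hr;

    for (DWORD i = 0; i < cStreams; i++)
    {
        PROPVARIANT var;
        // MF_MT_MAJOR_TYPE identifies the stream type (audio, video, ...).
        hr = pItem->GetStreamAttribute(i, MF_MT_MAJOR_TYPE, &var);
        if (SUCCEEDED(hr))
        {
            if (var.vt == VT_CLSID && *var.puuid == MFMediaType_Audio)
            {
                // Deselect the stream; takes effect when the item is
                // next set on the player with SetMediaItem.
                hr = pItem->SetStreamSelection(i, FALSE);
            }
            PropVariantClear(&var);
        }
        if (FAILED(hr)) break;
    }
    return hr;
}
```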
Queries the media item for a stream attribute.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Stream attributes describe an individual stream (audio, video, or other) within the presentation. To get an attribute that applies to the entire presentation, call IMFPMediaItem::GetPresentationAttribute.
Queries the media item for a presentation attribute.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Presentation attributes describe the presentation as a whole. To get an attribute that applies to an individual stream within the presentation, call IMFPMediaItem::GetStreamAttribute.
Gets various flags that describe the media item.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Sets a media sink for the media item. A media sink is an object that consumes the data from one or more streams.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
By default, the MFPlay player object renders audio streams to the Streaming Audio Renderer (SAR) and video streams to the Enhanced Video Renderer (EVR). You can use the SetStreamSink method to provide a different media sink for an audio or video stream; or to support other stream types besides audio and video. You can also use it to configure the SAR or EVR before they are used.
Call this method before calling IMFPMediaPlayer::SetMediaItem to set the media item on the player.
To reset the media item to use the default media sink, set pMediaSink to NULL.
Gets a property store that contains metadata for the source, such as author or title.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Contains methods to play media files.
The MFPlay player object exposes this interface. To get a reference to this interface, call the MFPCreateMediaPlayer function.
Gets the current playback rate.
Gets the current playback state of the MFPlay player object.
This method can be called after the player object has been shut down.
Many of the IMFPMediaPlayer methods complete asynchronously; while an asynchronous operation is pending, the state is not updated until the operation completes.
Gets a reference to the current media item.
-The
The previous remark also applies to setting the media item in the
Gets the current audio volume.
Gets the current audio balance.
Queries whether the audio is muted.
Gets the video source rectangle.
Gets the current aspect-ratio correction mode. This mode controls whether the aspect ratio of the video is preserved during playback.
Gets the window where the video is displayed.
The video window is specified when you first call the MFPCreateMediaPlayer function.
Gets the current color of the video border. The border color is used to letterbox the video.
Starts playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
This method completes asynchronously. When the operation completes, the application's IMFPMediaPlayerCallback::OnMediaPlayerEvent callback method is invoked.
Pauses playback. While playback is paused, the most recent video frame is displayed, and audio is silent.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
This method completes asynchronously. When the operation completes, the application's IMFPMediaPlayerCallback::OnMediaPlayerEvent callback method is invoked.
Stops playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
This method completes asynchronously. When the operation completes, the application's IMFPMediaPlayerCallback::OnMediaPlayerEvent callback method is invoked.
The current media item is still valid. After playback stops, the playback position resets to the beginning of the current media item.
Steps forward one video frame.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| Cannot frame step. Reasons for this error code include:
|
MF_E_SHUTDOWN | The object's Shutdown method was called. |
| The media source does not support frame stepping, or the current playback rate is negative. |
?
This method completes asynchronously. When the operation completes, the application's IMFPMediaPlayerCallback::OnMediaPlayerEvent callback method is invoked.
The player object does not support frame stepping during reverse playback (that is, while the playback rate is negative).
Sets the playback position.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| The value of pvPositionValue is not valid. |
| No media item has been queued. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
If you call this method while playback is stopped, the new position takes effect after playback resumes.
This method completes asynchronously. When the operation completes, the application's IMFPMediaPlayerCallback::OnMediaPlayerEvent callback method is invoked.
If playback was started before SetPosition is called, playback resumes at the new position. If playback was paused, the video is refreshed to display the current frame at the new position.
If you make two consecutive calls to SetPosition with guidPositionType equal to MFP_POSITIONTYPE_100NS, and the second call is made before the first call has completed, the second call supersedes the first. The status code for the superseded call is set to S_FALSE in the event data for that call. This behavior prevents excessive latency from repeated calls to SetPosition, as each call may force the media source to perform a relatively lengthy seek operation.
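A minimal sketch of a seek call, assuming mfplay.h and propvarutil.h (the helper name is illustrative): the position is a VT_I8 PROPVARIANT in 100-nanosecond units, and completion is reported through the player's event callback:

```cpp
// Sketch: seek the player to a position given in seconds.
#include <mfplay.h>
#include <propvarutil.h>

HRESULT SeekTo(IMFPMediaPlayer *pPlayer, double seconds)
{
    PROPVARIANT var;
    // The position is expressed in 100-nanosecond units, relative to
    // the start time of the current media item.
    InitPropVariantFromInt64((LONGLONG)(seconds * 10000000.0), &var);

    HRESULT hr = pPlayer->SetPosition(MFP_POSITIONTYPE_100NS, &var);
    PropVariantClear(&var);
    return hr;   // completion is signaled asynchronously via the callback
}
```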
Gets the current playback position.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| No media item has been queued. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
The playback position is calculated relative to the start time of the media item, which can be specified by calling IMFPMediaItem::SetStartStopPosition.
Gets the playback duration of the current media item.
-This method can return one of these values.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| The media source does not have a duration. This error can occur with a live source, such as a video camera. |
| There is no current media item. |
?
This method calculates the playback duration, taking into account the start and stop times for the media item. To set the start and stop times, call IMFPMediaItem::SetStartStopPosition.
For example, suppose that you load a 30-second audio file and set the start time equal to 2 seconds and the stop time equal to 10 seconds. The GetDuration method returns a duration of 8 seconds.
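Continuing the example above, a sketch of reading that duration (assuming mfplay.h and propvarutil.h; the helper name is illustrative):

```cpp
// Sketch: read the playback duration of the current item, in seconds.
#include <mfplay.h>
#include <propvarutil.h>

HRESULT GetDurationSeconds(IMFPMediaPlayer *pPlayer, double *pSeconds)
{
    PROPVARIANT var;
    HRESULT hr = pPlayer->GetDuration(MFP_POSITIONTYPE_100NS, &var);
    if (SUCCEEDED(hr))
    {
        LONGLONG hns = 0;
        PropVariantToInt64(var, &hns);       // duration in 100-ns units
        *pSeconds = (double)hns / 10000000.0;
        PropVariantClear(&var);
    }
    return hr;
}
```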
Sets the playback rate.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_OUT_OF_RANGE | The flRate parameter is zero. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
This method completes asynchronously. When the operation completes, the application's IMFPMediaPlayerCallback::OnMediaPlayerEvent callback method is invoked.
The method sets the nearest supported rate, which depends on the underlying media source. For example, if flRate is 50 and the source's maximum rate is 8× normal speed, the method sets the rate to 8.0. The actual rate is indicated in the event data for the MFP_EVENT_TYPE_RATE_SET event.
To find the range of supported rates, call IMFPMediaPlayer::GetSupportedRates.
This method does not support playback rates of zero, although Media Foundation defines a meaning for zero rates in some other contexts.
The new rate applies only to the current media item. Setting a new media item resets the playback rate to 1.0.
Gets the current playback rate.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the range of supported playback rates.
-This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not support playback in the requested direction (either forward or reverse). |
?
Playback rates are expressed as a ratio of the current rate to the normal rate. For example, 1.0 indicates normal playback speed, 0.5 indicates half speed, and 2.0 indicates twice speed. Positive values indicate forward playback, and negative values indicate reverse playback.
Gets the current playback state of the MFPlay player object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method can be called after the player object has been shut down.
Many of the IMFPMediaPlayer methods complete asynchronously; while an asynchronous operation is pending, the state is not updated until the operation completes.
Creates a media item from a URL.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| Invalid request. This error can occur when fSync is |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
MF_E_UNSUPPORTED_SCHEME | Unsupported protocol. |
?
This method does not queue the media item for playback. To queue the item for playback, call IMFPMediaPlayer::SetMediaItem.
The CreateMediaItemFromURL method can be called either synchronously or asynchronously:
The callback interface is set when you first call the MFPCreateMediaPlayer function to create the player object.
If you make multiple asynchronous calls to CreateMediaItemFromURL, they are not guaranteed to complete in the same order. Use the dwUserData parameter to match created media items with pending requests.
Currently, this method returns
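The asynchronous pattern above can be sketched as follows (assuming the mfplay.h declarations; the class name is illustrative and the IUnknown plumbing is kept minimal):

```cpp
// Sketch: handle MFP_EVENT_TYPE_MEDIAITEM_CREATED and queue the item.
#include <mfplay.h>

class PlayerCallback : public IMFPMediaPlayerCallback
{
    long m_cRef = 1;
public:
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == __uuidof(IMFPMediaPlayerCallback))
        {
            *ppv = static_cast<IMFPMediaPlayerCallback*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_cRef); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG c = InterlockedDecrement(&m_cRef);
        if (c == 0) delete this;
        return c;
    }
    void STDMETHODCALLTYPE OnMediaPlayerEvent(MFP_EVENT_HEADER *pEventHeader)
    {
        if (FAILED(pEventHeader->hrEvent)) return;
        if (pEventHeader->eEventType == MFP_EVENT_TYPE_MEDIAITEM_CREATED)
        {
            MFP_MEDIAITEM_CREATED_EVENT *pEvent =
                MFP_GET_MEDIAITEM_CREATED_EVENT(pEventHeader);
            // pEvent->dwUserData matches the value passed to
            // CreateMediaItemFromURL, so pending requests can be told apart.
            pEventHeader->pMediaPlayer->SetMediaItem(pEvent->pMediaItem);
        }
    }
};
```

Usage: pass the callback to MFPCreateMediaPlayer, then call CreateMediaItemFromURL with fSync set to FALSE; SetMediaItem is invoked from the callback when the item is ready.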
Creates a media item from an object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| Invalid request. This error can occur when fSync is |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
The pIUnknownObj parameter must specify one of the following:
This method does not queue the media item for playback. To queue the item for playback, call IMFPMediaPlayer::SetMediaItem.
The CreateMediaItemFromObject method can be called either synchronously or asynchronously:
The callback interface is set when you first call the MFPCreateMediaPlayer function to create the player object.
If you make multiple asynchronous calls to CreateMediaItemFromObject, they are not guaranteed to complete in the same order. Use the dwUserData parameter to match created media items with pending requests.
Queues a media item for playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| The media item contains protected content. MFPlay currently does not support protected content. |
MF_E_NO_AUDIO_PLAYBACK_DEVICE | No audio playback device was found. This error can occur if the media source contains audio, but no audio playback devices are available on the system. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
This method completes asynchronously. When the operation completes, the application's IMFPMediaPlayerCallback::OnMediaPlayerEvent callback method is invoked.
To create a media item, call IMFPMediaPlayer::CreateMediaItemFromURL or IMFPMediaPlayer::CreateMediaItemFromObject.
Clears the current media item.
Note: This method is currently not implemented. If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method stops playback and releases the player object's references to the current media item.
This method completes asynchronously. When the operation completes, the application's IMFPMediaPlayerCallback::OnMediaPlayerEvent callback method is invoked.
Gets a reference to the current media item.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| There is no current media item. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
The
The previous remark also applies to setting the media item in the
Gets the current audio volume.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Sets the audio volume.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_OUT_OF_RANGE | The flVolume parameter is invalid. |
?
If you call this method before playback starts, the setting is applied after playback starts.
This method does not change the master volume level for the player's audio session. Instead, it adjusts the per-channel volume levels for audio stream(s) that belong to the current media item. Other streams in the audio session are not affected. For more information, see Managing the Audio Session.
Gets the current audio balance.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Sets the audio balance.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_OUT_OF_RANGE | The flBalance parameter is invalid. |
?
If you call this method before playback starts, the setting is applied when playback starts.
Queries whether the audio is muted.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Mutes or unmutes the audio.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If you call this method before playback starts, the setting is applied after playback starts.
This method does not mute the entire audio session to which the player belongs. It mutes only the streams from the current media item. Other streams in the audio session are not affected. For more information, see Managing the Audio Session.
Gets the size and aspect ratio of the video. These values are computed before any scaling is done to fit the video into the destination window.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
At least one parameter must be non-NULL.
Gets the range of video sizes that can be displayed without significantly degrading performance or image quality.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
At least one parameter must be non-NULL.
Sets the video source rectangle.
MFPlay clips the video to this rectangle and stretches the rectangle to fill the video window.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
MFPlay stretches the source rectangle to fill the entire video window. By default, MFPlay maintains the source's correct aspect ratio, letterboxing if needed. The letterbox color is controlled by the IMFPMediaPlayer::SetBorderColor method.
This method fails if no media item is currently set, or if the current media item does not contain video.
To set the video position before playback starts, call this method inside your event handler for the MFP_EVENT_TYPE_MEDIAITEM_SET event.
Gets the video source rectangle.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
Specifies whether the aspect ratio of the video is preserved during playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
This method fails if no media item is currently set, or if the current media item does not contain video.
To set the aspect-ratio mode before playback starts, call this method inside your event handler for the MFP_EVENT_TYPE_MEDIAITEM_SET event.
Gets the current aspect-ratio correction mode. This mode controls whether the aspect ratio of the video is preserved during playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
Gets the window where the video is displayed.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The video window is specified when you first call the MFPCreateMediaPlayer function.
Updates the video frame.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
Call this method when your application's video playback window receives either a WM_PAINT or WM_SIZE message. This method performs two functions: it repaints the current video frame, and it adjusts the displayed video to match the current size of the window.
Sets the color for the video border. The border color is used to letterbox the video.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
This method fails if no media item is currently set, or if the current media item does not contain video.
To set the border color before playback starts, call this method inside your event handler for the MFP_EVENT_TYPE_MEDIAITEM_SET event.
Gets the current color of the video border. The border color is used to letterbox the video.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
?
Applies an audio or video effect to playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| This effect was already added. |
?
The object specified in the pEffect parameter can implement either a video effect or an audio effect. The effect is applied to any media items set after the method is called. It is not applied to the current media item.
For each media item, the effect is applied to the first selected stream of the matching type (audio or video). If a media item has two selected streams of the same type, the second stream does not receive the effect. The effect is ignored if the media item does not contain a stream that matches the effect type. For example, if you set a video effect and play a file that contains just audio, the video effect is ignored, although no error is raised.
The effect is applied to all subsequent media items, until the application removes the effect. To remove an effect, call IMFPMediaPlayer::RemoveEffect or IMFPMediaPlayer::RemoveAllEffects.
If you set multiple effects of the same type (audio or video), they are applied in the same order in which you call InsertEffect.
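As a small sketch of adding an effect (assuming mfplay.h; pEffect is an already-created Media Foundation transform or activation object supplied by the caller, and the helper name is illustrative):

```cpp
// Sketch: register an effect for all subsequently queued media items.
#include <mfplay.h>

HRESULT AddOptionalEffect(IMFPMediaPlayer *pPlayer, IUnknown *pEffect)
{
    // fOptional = TRUE lets playback continue even if the effect
    // cannot be applied to a given item. The effect applies to media
    // items set after this call, not to the current one.
    return pPlayer->InsertEffect(pEffect, TRUE);
}
```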
Removes an effect that was added with the IMFPMediaPlayer::InsertEffect method.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| The effect was not found. |
?
The change applies to the next media item that is set on the player. The effect is not removed from the current media item.
Removes all effects that were added with the IMFPMediaPlayer::InsertEffect method.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The change applies to the next media item that is set on the player. The effects are not removed from the current media item.
Shuts down the MFPlay player object and releases any resources the object is using.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
After this method is called, most IMFPMediaPlayer methods return MF_E_SHUTDOWN.
The player object automatically shuts itself down when its reference count reaches zero. You can use the Shutdown method to shut down the player before all of the references have been released.
Callback interface for the IMFPMediaPlayer interface.
To set the callback, pass an IMFPMediaPlayerCallback pointer in the pCallback parameter of the MFPCreateMediaPlayer function.
Called by the MFPlay player object to notify the application of a playback event.
The specific type of playback event is given in the eEventType member of the MFP_EVENT_HEADER structure.
It is safe to call IMFPMediaPlayer methods from inside the OnMediaPlayerEvent callback.
Enables a media source to receive a reference to the
If a media source exposes this interface, the Protected Media Path (PMP) Media Session calls SetPMPHost with a reference to the
Provides a reference to the
The
Provides a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The
Provides a mechanism for a media source to implement content protection functionality in a Windows Store app.
-When to implement: A media source implements
Sets a reference to the
Sets a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Enables a media source in the application process to create objects in the protected media path (PMP) process.
-This interface is used when a media source resides in the application process but the Media Session resides in a PMP process. The media source can use this interface to create objects in the PMP process. For example, to play DRM-protected content, the media source typically must create an input trust authority (ITA) in the PMP process.
To use this interface, the media source implements the
You can also get a reference to this interface by calling
Blocks the protected media path (PMP) process from ending.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When this method is called, it increments the lock count on the PMP process. For every call to this method, the application should make a corresponding call to IMFPMPHost::UnlockProcess.
Decrements the lock count on the protected media path (PMP) process. Call this method once for each call to IMFPMPHost::LockProcess.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates an object in the protected media path (PMP) process, from a CLSID.
-The CLSID of the object to create.
A reference to the IStream interface, used to initialize the object after it is created. This parameter can be NULL.
The interface identifier (IID) of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
You can use the pStream parameter to initialize the object after it is created.
-Allows a media source to create a Windows Runtime object in the Protected Media Path (PMP) process.
-Blocks the protected media path (PMP) process from ending.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When this method is called, it increments the lock count on the PMP process. For every call to this method, the application should make a corresponding call to IMFPMPHostApp::UnlockProcess.
Decrements the lock count on the protected media path (PMP) process. Call this method once for each call to IMFPMPHostApp::LockProcess.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates a Windows Runtime object in the protected media path (PMP) process.
-Id of object to create.
Data to be passed to the object by way of an IPersistStream interface.
The interface identifier (IID) of the interface to retrieve.
Receives a reference to the created object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Enables two instances of the Media Session to share the same protected media path (PMP) process.
-If your application creates more than one instance of the Media Session, you can use this interface to share the same PMP process among several instances. This can be more efficient than re-creating the PMP process each time.
Use this interface as follows:
Blocks the protected media path (PMP) process from ending.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
?
When this method is called, it increments the lock count on the PMP process. For every call to this method, the application should make a corresponding call to IMFPMPServer::UnlockProcess.
Decrements the lock count on the protected media path (PMP) process. Call this method once for each call to IMFPMPServer::LockProcess.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
?
Creates an object in the protected media path (PMP) process.
-CLSID of the object to create.
Interface identifier of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
?
Represents a presentation clock, which is used to schedule when samples are rendered and to synchronize multiple streams.
To create a new instance of the presentation clock, call the MFCreatePresentationClock function.
To get the presentation clock from the Media Session, call IMFMediaSession::GetClock.
Retrieves the clock's presentation time source.
-Retrieves the latest clock time.
-This method does not attempt to smooth out jitter or otherwise account for any inaccuracies in the clock time.
Sets the time source for the presentation clock. The time source is the object that drives the clock by providing the current time.
Pointer to the IMFPresentationTimeSource interface of the time source.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The time source does not have a frequency of 10 MHz. |
| The time source has not been initialized. |
?
The presentation clock cannot start until it has a time source.
The time source is automatically registered to receive state change notifications from the clock, through the time source's
The time source must have a frequency of 10 MHz. See
Retrieves the clock's presentation time source.
Receives a reference to the time source's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time source was set on this clock. |
Retrieves the latest clock time.
Receives the latest clock time, in 100-nanosecond units. The time is relative to when the clock was last started.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The clock does not have a presentation time source. Call |
This method does not attempt to smooth out jitter or otherwise account for any inaccuracies in the clock time.
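Clock times throughout these interfaces are expressed in 100-nanosecond units, so 10,000,000 ticks correspond to one second. A small, purely illustrative conversion helper:

```python
HNS_PER_SECOND = 10_000_000  # number of 100-nanosecond ticks in one second


def hns_to_seconds(ticks):
    """Convert a clock time in 100-ns units to seconds."""
    return ticks / HNS_PER_SECOND


def seconds_to_hns(seconds):
    """Convert seconds to a clock time in 100-ns units."""
    return int(seconds * HNS_PER_SECOND)


# A time of three seconds after the clock was last started:
t = seconds_to_hns(3.0)
print(t)                  # 30000000
print(hns_to_seconds(t))  # 3.0
```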
Registers an object to be notified whenever the clock starts, stops, or pauses, or changes rate.
Pointer to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Before releasing the object, call
Unregisters an object that is receiving state-change notifications from the clock.
Pointer to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Starts the presentation clock.
Initial starting time, in 100-nanosecond units. At the time the Start method is called, the clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time source was set on this clock. |
This method is valid in all states (stopped, paused, or running).
If the clock is paused and restarted from the same position (llClockStartOffset is PRESENTATION_CURRENT_POSITION), the presentation clock sends an
The presentation clock initiates the state change by calling OnClockStart or OnClockRestart on the clock's time source. This call is made synchronously. If it fails, the state change does not occur. If the call succeeds, the state changes, and the clock notifies the other state-change subscribers by calling their OnClockStart or OnClockRestart methods. These calls are made asynchronously.
If the clock is already running, calling Start again has the effect of seeking the clock to the new StartOffset position.
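The start/restart/seek behavior described above can be sketched as a tiny state model. This is an illustrative simplification, not the real clock (which derives its time from the time source); PRESENTATION_CURRENT_POSITION is modeled as a sentinel value:

```python
PRESENTATION_CURRENT_POSITION = object()  # sentinel: "resume from current position"


class ClockModel:
    """Toy model of Start/Pause offset semantics (not the real clock)."""

    def __init__(self):
        self.state = "stopped"
        self.position = 0  # clock position, in 100-ns units

    def start(self, offset):
        if offset is not PRESENTATION_CURRENT_POSITION:
            # Starting with an explicit offset; if already running, this
            # behaves like a seek to the new start-offset position.
            self.position = offset
        # With PRESENTATION_CURRENT_POSITION the clock resumes where it was.
        self.state = "running"

    def pause(self):
        self.state = "paused"


clock = ClockModel()
clock.start(0)
clock.position = 50_000_000          # pretend 5 seconds have elapsed
clock.pause()
clock.start(PRESENTATION_CURRENT_POSITION)  # restart: position unchanged
print(clock.position)                # 50000000
clock.start(10_000_000)              # already running: acts as a seek
print(clock.position)                # 10000000
```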
Stops the presentation clock. While the clock is stopped, the clock time does not advance, and the clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time source was set on this clock. |
| The clock is already stopped. |
This method is valid when the clock is running or paused.
The presentation clock initiates the state change by calling
Pauses the presentation clock. While the clock is paused, the clock time does not advance, and the clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time source was set on this clock. |
| The clock is already paused. |
| The clock is stopped. This request is not valid when the clock is stopped. |
This method is valid when the clock is running. It is not valid when the clock is paused or stopped.
The presentation clock initiates the state change by calling
Describes the details of a presentation. A presentation is a set of related media streams that share a common presentation time.
Presentation descriptors are used to configure media sources and some media sinks. To get the presentation descriptor from a media source, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of stream descriptors in the presentation. Each stream descriptor contains information about one stream in the media source. To retrieve a stream descriptor, call the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of stream descriptors in the presentation. Each stream descriptor contains information about one stream in the media source. To retrieve a stream descriptor, call the
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a stream descriptor for a stream in the presentation. The stream descriptor contains information about the stream.
Zero-based index of the stream. To find the number of streams in the presentation, call the
Receives a Boolean value. The value is TRUE if the stream is currently selected, or
Receives a reference to the stream descriptor's
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Selects a stream in the presentation.
The stream number to select, indexed from zero. To find the number of streams in the presentation, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| dwDescriptorIndex is out of range. |
If a stream is selected, the media source will generate data for that stream. The media source will not generate data for deselected streams. To deselect a stream, call
To query whether a stream is selected, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
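The selection contract (SelectStream, DeselectStream, and the selected flag returned per stream) can be modeled as a set of selected stream indices with range checking. An illustrative sketch, not the real COM interface:

```python
class PresentationDescriptorModel:
    """Toy model of per-stream selection state (not the real interface)."""

    def __init__(self, stream_count):
        self.stream_count = stream_count
        self.selected = set()  # indices of currently selected streams

    def _check(self, index):
        # Mirrors the out-of-range error described in the tables above.
        if not 0 <= index < self.stream_count:
            raise IndexError("descriptor index is out of range")

    def select_stream(self, index):
        self._check(index)
        self.selected.add(index)      # source will generate data for this stream

    def deselect_stream(self, index):
        self._check(index)
        self.selected.discard(index)  # no data generated for this stream

    def is_selected(self, index):
        self._check(index)
        return index in self.selected


pd = PresentationDescriptorModel(3)
pd.select_stream(0)
pd.select_stream(2)
pd.deselect_stream(2)
print(sorted(pd.selected))  # [0]
```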
Deselects a stream in the presentation.
The stream number to deselect, indexed from zero. To find the number of streams in the presentation, call the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| dwDescriptorIndex is out of range. |
If a stream is deselected, no data is generated for that stream. To select the stream again, call
To query whether a stream is selected, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a copy of this presentation descriptor.
Receives a reference to the
If this method succeeds, it returns
This method performs a shallow copy of the presentation descriptor. The stream descriptors are not cloned. Therefore, use caution when modifying the presentation descriptor or its stream descriptors.
If the original presentation descriptor is from a media source, do not modify the presentation descriptor unless the source is stopped. If you use the presentation descriptor to configure a media sink, do not modify the presentation descriptor after the sink is configured.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a stream descriptor for a stream in the presentation. The stream descriptor contains information about the stream.
Zero-based index of the stream. To find the number of streams in the presentation, call the
Receives a Boolean value. The value is TRUE if the stream is currently selected, or
Receives a reference to the stream descriptor's
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Provides the clock times for the presentation clock.
This interface is implemented by presentation time sources. A presentation time source is an object that provides the clock time for the presentation clock. For example, the audio renderer is a presentation time source. The rate at which the audio renderer consumes audio samples determines the clock time. If the audio format is 44100 samples per second, the audio renderer will report that one second has passed for every 44100 audio samples it plays. In this case, the timing is provided by the sound card.
To set the presentation time source on the presentation clock, call
A presentation time source must also implement the
Media Foundation provides a presentation time source that is based on the system clock. To create this object, call the
Retrieves the underlying clock that the presentation time source uses to generate its clock times.
A presentation time source must support stopping, starting, pausing, and rate changes. However, in many cases the time source derives its clock times from a hardware clock or other device. The underlying clock is always running, and might not support rate changes.
Optionally, a time source can expose the underlying clock by implementing this method. The underlying clock is always running, even when the presentation time source is paused or stopped. (Therefore, the underlying clock returns the
The underlying clock is useful if you want to make decisions based on the clock times while the presentation clock is stopped or paused.
If the time source does not expose an underlying clock, the method returns
Retrieves the underlying clock that the presentation time source uses to generate its clock times.
Receives a reference to the clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This time source does not expose an underlying clock. |
A presentation time source must support stopping, starting, pausing, and rate changes. However, in many cases the time source derives its clock times from a hardware clock or other device. The underlying clock is always running, and might not support rate changes.
Optionally, a time source can expose the underlying clock by implementing this method. The underlying clock is always running, even when the presentation time source is paused or stopped. (Therefore, the underlying clock returns the
The underlying clock is useful if you want to make decisions based on the clock times while the presentation clock is stopped or paused.
If the time source does not expose an underlying clock, the method returns
Provides a method that allows content protection systems to perform a handshake with the protected environment. This is needed because the CreateFile and DeviceIoControl APIs are not available to Windows Store apps.
See
Allows content protection systems to access the protected environment.
The length in bytes of the input data.
A reference to the input data.
The length in bytes of the output data.
A reference to the output data.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
See
Gets the Global Revocation List (GRL).
The length of the data returned in output.
Receives the contents of the global revocation list file.
If this method succeeds, it returns
Allows reading of the system Global Revocation List (GRL).
Enables the quality manager to adjust the audio or video quality of a component in the pipeline.
This interface is exposed by pipeline components that can adjust their quality. Typically it is exposed by decoders and stream sinks. For example, the enhanced video renderer (EVR) implements this interface. However, media sources can also implement this interface.
To get a reference to this interface from a media source, call
The quality manager typically obtains this interface when the quality manager's
Retrieves the current drop mode.
Retrieves the current quality level.
Sets the drop mode. In drop mode, a component drops samples, more or less aggressively depending on the level of the drop mode.
Requested drop mode, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The component does not support the specified mode or any higher modes. |
If this method is called on a media source, the media source might switch between thinned and non-thinned output. If that occurs, the affected streams will send an
Sets the quality level. The quality level determines how the component consumes or produces samples.
Requested quality level, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The component does not support the specified quality level or any levels below it. |
Retrieves the current drop mode.
Receives the drop mode, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the current quality level.
Receives the quality level, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Drops samples over a specified interval of time.
Amount of time to drop, in 100-nanosecond units. This value is always absolute. If the method is called multiple times, do not add the times from previous calls.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support this method. |
Ideally the quality manager can prevent a renderer from falling behind. But if this does occur, then simply lowering quality does not guarantee the renderer will ever catch up. As a result, audio and video might fall out of sync. To correct this problem, the quality manager can call DropTime to request that the renderer drop samples quickly over a specified time interval. After that period, the renderer stops dropping samples.
This method is primarily intended for the video renderer. Dropped audio samples cause audio glitching, which is not desirable.
If a component does not support this method, it should return
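The essential point of DropTime is that the interval is absolute, not cumulative: a second request replaces the outstanding one rather than extending it. An illustrative model:

```python
class DropWindowModel:
    """Illustrative model of DropTime's absolute (non-additive) semantics."""

    def __init__(self):
        self.drop_until = 0  # presentation time (100-ns units) until which to drop

    def drop_time(self, now, amount):
        # The amount is always absolute: it is NOT added to a previous request.
        self.drop_until = now + amount

    def should_drop(self, sample_time):
        # Drop samples whose presentation time falls inside the window.
        return sample_time < self.drop_until


w = DropWindowModel()
w.drop_time(now=0, amount=2_000_000)
w.drop_time(now=0, amount=3_000_000)  # replaces the request; does not become 5_000_000
print(w.drop_until)                   # 3000000
print(w.should_drop(2_500_000))       # True
print(w.should_drop(3_500_000))       # False
```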
Enables a pipeline object to adjust its own audio or video quality, in response to quality messages.
This interface enables a pipeline object to respond to quality messages from the media sink. Currently, it is supported only for video decoders.
If a video decoder exposes
If the decoder exposes
The preceding remarks apply to the default implementation of the quality manager; custom quality managers can implement other behaviors.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Forwards an
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Queries an object for the number of quality modes it supports. Quality modes are used to adjust the trade-off between quality and speed when rendering audio or video.
The default presenter for the enhanced video renderer (EVR) implements this interface. The EVR uses the interface to respond to quality messages from the quality manager.
Gets the maximum drop mode. A higher drop mode means that the object will, if needed, drop samples more aggressively to match the presentation clock.
To get the current drop mode, call the
Gets the minimum quality level that is supported by the component.
To get the current quality level, call the
Gets the maximum drop mode. A higher drop mode means that the object will, if needed, drop samples more aggressively to match the presentation clock.
Receives the maximum drop mode, specified as a member of the
If this method succeeds, it returns
To get the current drop mode, call the
Gets the minimum quality level that is supported by the component.
Receives the minimum quality level, specified as a member of the
If this method succeeds, it returns
To get the current quality level, call the
Adjusts playback quality. This interface is exposed by the quality manager.
Media Foundation provides a default quality manager that is tuned for playback. Applications can provide a custom quality manager to the Media Session by setting the
Called when the Media Session is about to start playing a new topology.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
In a typical quality manager this method does the following:
Enumerates the nodes in the topology.
Calls
Queries for the
The quality manager can then use the
Called when the Media Session selects a presentation clock.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Called when the media processor is about to deliver an input sample to a pipeline component.
Pointer to the
Index of the input stream on the topology node.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is called for every sample passing through every pipeline component. Therefore, the method must return quickly to avoid introducing too much latency into the pipeline.
Called after the media processor gets an output sample from a pipeline component.
Pointer to the
Index of the output stream on the topology node.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is called for every sample passing through every pipeline component. Therefore, the method must return quickly to avoid introducing too much latency into the pipeline.
Called when a pipeline component sends an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Called when the Media Session is shutting down.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The quality manager should release all references to the Media Session when this method is called.
Gets or sets the playback rate.
Objects can expose this interface as a service. To obtain a reference to the interface, call
For more information, see About Rate Control.
To discover the playback rates that an object supports, use the
Sets the playback rate.
If TRUE, the media streams are thinned. Otherwise, the stream is not thinned. For media sources and demultiplexers, the object must thin the streams when this parameter is TRUE. For downstream transforms, such as decoders and multiplexers, this parameter is informative; it notifies the object that the input streams are thinned. For information, see About Rate Control.
The requested playback rate. Positive values indicate forward playback, negative values indicate reverse playback, and zero indicates scrubbing (the source delivers a single frame).
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support reverse playback. |
| The object does not support thinning. |
| The object does not support the requested playback rate. |
| The object cannot change to the new rate while in the running state. |
The Media Session prevents some transitions between rate boundaries, depending on the current playback state:
Playback State | Forward/Reverse | Forward/Zero | Reverse/Zero |
---|---|---|---|
Running | No | No | No |
Paused | No | Yes | No |
Stopped | Yes | Yes | Yes |
If the transition is not supported, the method returns
When a media source completes a call to SetRate, it sends the
If a media source switches between thinned and non-thinned playback, the streams send an
When the Media Session completes a call to SetRate, it sends the
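The transition table above can be encoded directly: a rate change is a boundary crossing when the current and requested rates fall into different classes (forward, reverse, or zero), and the crossing is permitted only where the table says Yes. An illustrative sketch:

```python
def boundary(rate):
    """Classify a playback rate: forward, reverse, or zero (scrubbing)."""
    if rate > 0:
        return "forward"
    if rate < 0:
        return "reverse"
    return "zero"


# Allowed boundary crossings per playback state, taken from the table above.
ALLOWED = {
    "running": set(),  # no boundary crossings while running
    "paused":  {("forward", "zero"), ("zero", "forward")},
    "stopped": {("forward", "reverse"), ("reverse", "forward"),
                ("forward", "zero"), ("zero", "forward"),
                ("reverse", "zero"), ("zero", "reverse")},
}


def transition_allowed(state, current_rate, new_rate):
    a, b = boundary(current_rate), boundary(new_rate)
    if a == b:
        # Same boundary (e.g. 1.0 -> 2.0) is not a boundary crossing.
        return True
    return (a, b) in ALLOWED[state]


print(transition_allowed("paused", 1.0, 0.0))    # True  (Forward/Zero while paused)
print(transition_allowed("running", 1.0, -1.0))  # False (Forward/Reverse while running)
print(transition_allowed("stopped", -1.0, 0.0))  # True
```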
Gets the current playback rate.
Receives the current playback rate.
Receives the value TRUE if the stream is currently being thinned. If the object does not support thinning, this parameter always receives the value
Queries the range of playback rates that are supported, including reverse playback.
To get a reference to this interface, call
Applications can use this interface to discover the fastest and slowest playback rates that are possible, and to query whether a given playback rate is supported. Applications obtain this interface from the Media Session. Internally, the Media Session queries the objects in the pipeline. For more information, see How to Determine Supported Rates.
To get the current playback rate and to change the playback rate, use the
Playback rates are expressed as a ratio of the normal playback rate. Reverse playback is expressed as a negative rate. Playback is either thinned or non-thinned. In thinned playback, some of the source data is skipped (typically delta frames). In non-thinned playback, all of the source data is rendered.
You might need to implement this interface if you are writing a pipeline object (media source, transform, or media sink). For more information, see Implementing Rate Control.
Retrieves the slowest playback rate supported by the object.
Specifies whether to query the slowest forward playback rate or reverse playback rate. The value is a member of the
If TRUE, the method retrieves the slowest thinned playback rate. Otherwise, the method retrieves the slowest non-thinned playback rate. For information about thinning, see About Rate Control.
Receives the slowest playback rate that the object supports.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support reverse playback. |
| The object does not support thinning. |
The value returned in plfRate represents a lower bound. Playback at this rate is not guaranteed. Call
If eDirection is
Gets the fastest playback rate supported by the object.
Specifies whether to query the fastest forward playback rate or reverse playback rate. The value is a member of the
If TRUE, the method retrieves the fastest thinned playback rate. Otherwise, the method retrieves the fastest non-thinned playback rate. For information about thinning, see About Rate Control.
Receives the fastest playback rate that the object supports.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support reverse playback. |
| The object does not support thinning. |
For some formats (such as ASF), thinning means dropping all frames that are not I-frames. If a component produces stream data, such as a media source or a demultiplexer, it should pay attention to the fThin parameter and return
If the component processes or receives a stream (most transforms or media sinks), it may ignore this parameter if it does not care whether the stream is thinned. In the Media Session's implementation of rate support, if the transforms do not explicitly support reverse playback, the Media Session will attempt to playback in reverse with thinning but not without thinning. Therefore, most applications will set fThin to TRUE when using the Media Session for reverse playback.
If eDirection is
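For stream-producing components, thinned playback of formats like ASF amounts to keeping only I-frames and dropping delta frames. A simplified illustration (the frame representation here is invented for the example):

```python
def thin(frames):
    """Illustrative thinning: keep only key frames (I-frames), drop delta frames."""
    return [f for f in frames if f["keyframe"]]


stream = [
    {"pts": 0, "keyframe": True},   # I-frame
    {"pts": 1, "keyframe": False},  # delta frame
    {"pts": 2, "keyframe": False},  # delta frame
    {"pts": 3, "keyframe": True},   # I-frame
]
print([f["pts"] for f in thin(stream)])  # [0, 3]
```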
Queries whether the object supports a specified playback rate.
If TRUE, the method queries whether the object supports the playback rate with thinning. Otherwise, the method queries whether the object supports the playback rate without thinning. For information about thinning, see About Rate Control.
The playback rate to query.
If the object does not support the playback rate given in flRate, this parameter receives the closest supported playback rate. If the method returns
The method returns an
Return code | Description |
---|---|
| The object supports the specified rate. |
| The object does not support reverse playback. |
| The object does not support thinning. |
| The object does not support the specified rate. |
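If an object supports a contiguous range of rates, the closest-supported-rate behavior of IsRateSupported reduces to clamping. This is a simplifying assumption for illustration only; real objects may support only discrete rates:

```python
def is_rate_supported(requested, slowest, fastest):
    """Illustrative sketch: clamp a requested rate into [slowest, fastest].

    Returns (supported, nearest_supported_rate), modeling the documented
    behavior of receiving the closest supported rate when the requested
    one is not supported. Assumes a contiguous range of supported rates.
    """
    nearest = min(max(requested, slowest), fastest)
    return (nearest == requested, nearest)


print(is_rate_supported(2.0, 0.5, 4.0))  # (True, 2.0)
print(is_rate_supported(8.0, 0.5, 4.0))  # (False, 4.0) - nearest supported rate
```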
Creates an instance of either the sink writer or the source reader.
To get a reference to this interface, call the CoCreateInstance function. The CLSID is CLSID_MFReadWriteClassFactory. Call the
As an alternative to using this interface, you can call any of the following functions:
Internally, these functions use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Creates an instance of the sink writer or source reader, given a URL.
The CLSID of the object to create.
Value | Meaning |
---|---|
| Create the sink writer. The ppvObject parameter receives an |
| Create the source reader. The ppvObject parameter receives an |
A null-terminated string that contains a URL. If clsid is CLSID_MFSinkWriter, the URL specifies the name of the output file. The sink writer creates a new file with this name. If clsid is CLSID_MFSourceReader, the URL specifies the input file for the source reader.
A reference to the
This parameter can be
The IID of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Creates an instance of the sink writer or source reader, given an
The CLSID of the object to create.
Value | Meaning |
---|---|
| Create the sink writer. The ppvObject parameter receives an |
| Create the source reader. The ppvObject parameter receives an |
A reference to the
Value | Meaning |
---|---|
Pointer to a byte stream. If clsid is CLSID_MFSinkWriter, the sink writer writes data to this byte stream. If clsid is CLSID_MFSourceReader, this byte stream provides the source data for the source reader. | |
Pointer to a media sink. Applies only when clsid is CLSID_MFSinkWriter. | |
Pointer to a media source. Applies only when clsid is CLSID_MFSourceReader. |
A reference to the
This parameter can be
The IID of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Notifies a pipeline object to register itself with the Multimedia Class Scheduler Service (MMCSS).
Any pipeline object that creates worker threads should implement this interface.
Media Foundation provides a mechanism for applications to associate branches in the topology with MMCSS tasks. A topology branch is defined by a source node in the topology and all of the nodes downstream from it. An application registers a topology branch with MMCSS by setting the
When the application registers a topology branch with MMCSS, the Media Session queries every pipeline object in that branch for the
When the application unregisters the topology branch, the Media Session calls UnregisterThreads.
If a pipeline object creates its own worker threads but does not implement this interface, it can cause priority inversions in the Media Foundation pipeline, because high-priority processing threads might be blocked while waiting for the component to process data on a thread with lower priority.
Pipeline objects that do not create worker threads do not need to implement this interface.
In Windows 8, this interface is extended with
Specifies the work queue for the topology branch that contains this object.
An application can register a branch of the topology to use a private work queue. The Media Session notifies any pipeline object that supports
When the application unregisters the topology branch, the Media Session calls SetWorkQueue again with the value
Notifies the object to register its worker threads with the Multimedia Class Scheduler Service (MMCSS).
The MMCSS task identifier.
The name of the MMCSS task.
If this method succeeds, it returns
The object's worker threads should register themselves with MMCSS by calling AvSetMmThreadCharacteristics, using the task name and identifier specified in this method.
Notifies the object to unregister its worker threads from the Multimedia Class Scheduler Service (MMCSS).
If this method succeeds, it returns
The object's worker threads should unregister themselves from MMCSS by calling AvRevertMmThreadCharacteristics.
Specifies the work queue for the topology branch that contains this object.
The identifier of the work queue, or the value
If this method succeeds, it returns
An application can register a branch of the topology to use a private work queue. The Media Session notifies any pipeline object that supports
When the application unregisters the topology branch, the Media Session calls SetWorkQueue again with the value
Notifies a pipeline object to register itself with the Multimedia Class Scheduler Service (MMCSS).
This interface is a replacement for the
Notifies the object to register its worker threads with the Multimedia Class Scheduler Service (MMCSS).
The MMCSS task identifier. If the value is zero on input, the object should create a new MMCSS task group. See Remarks.
The name of the MMCSS task.
The base priority of the thread.
If this method succeeds, it returns
If the object does not create worker threads, the method should simply return
Otherwise, if the value of *pdwTaskIndex is zero on input, the object should create a new MMCSS task group, register its worker threads with that group, and set *pdwTaskIndex equal to the new task identifier.
If the value of *pdwTaskIndex is nonzero on input, the parameter contains an existing MMCSS task identifier. In that case, all worker threads of the object should register themselves for that task by calling AvSetMmThreadCharacteristics.
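The in/out contract of the task-identifier parameter (zero on input means create a new MMCSS task group; nonzero means join an existing one) can be sketched as follows. The counter standing in for MMCSS task creation is purely illustrative:

```python
_next_task_id = 100  # illustrative counter standing in for MMCSS task creation


def register_threads_ex(task_index):
    """Illustrative sketch of the *pdwTaskIndex in/out contract.

    Returns the task identifier that the object's worker threads should join.
    """
    global _next_task_id
    if task_index == 0:
        # Zero on input: create a new MMCSS task group and return its identifier.
        _next_task_id += 1
        return _next_task_id
    # Nonzero on input: join the existing task group.
    return task_index


first = register_threads_ex(0)       # creates a new task group
second = register_threads_ex(first)  # joins the existing group
print(first == second)               # True
```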
Notifies the object to unregister its worker threads from the Multimedia Class Scheduler Service (MMCSS).
If this method succeeds, it returns
Specifies the work queue that this object should use for asynchronous work items.
The work queue identifier.
The base priority for work items.
If this method succeeds, it returns
The object should use the values of dwMultithreadedWorkQueueId and lWorkItemBasePriority when it queues new work items. Use the
Used by the Microsoft Media Foundation proxy/stub DLL to marshal certain asynchronous method calls across process boundaries.
Applications do not use or implement this interface.
Modifies a topology for use in a Terminal Services environment.
To use this interface, do the following:
The application must call UpdateTopology before calling
Modifies a topology for use in a Terminal Services environment.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the application is running in a Terminal Services client session, call this method before calling
Retrieves a reference to the remote object for which this object is a proxy.
Retrieves a reference to the remote object for which this object is a proxy.
Interface identifier (IID) of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a reference to the object that is hosting this proxy.
Interface identifier (IID) of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets and retrieves Synchronized Accessible Media Interchange (SAMI) styles on the SAMI Media Source.
To get a reference to this interface, call
Gets the number of styles defined in the SAMI file.
Gets a list of the style names defined in the SAMI file.
Gets the number of styles defined in the SAMI file.
Receives the number of SAMI styles in the file.
If this method succeeds, it returns
Gets a list of the style names defined in the SAMI file.
Pointer to a
If this method succeeds, it returns
Sets the current style on the SAMI media source.
Pointer to a null-terminated string containing the name of the style. To clear the current style, pass an empty string (""). To get the list of style names, call
If this method succeeds, it returns
Gets the current style from the SAMI media source.
Receives a reference to a null-terminated string that contains the name of the style. If no style is currently set, the method returns an empty string. The caller must free the memory for the string by calling CoTaskMemFree.
If this method succeeds, it returns
Represents a media sample, which is a container object for media data. For video, a sample typically contains one video frame. For audio data, a sample typically contains multiple audio samples, rather than a single sample of audio.
A media sample contains zero or more buffers. Each buffer manages a block of memory, and is represented by the
To create a new media sample, call
When you call CopyAllItems, inherited from the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves flags associated with the sample.
Currently no flags are defined. Instead, metadata for samples is defined using attributes. To get attributes from a sample, use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the presentation time of the sample.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the duration of the sample.
-If the sample contains more than one buffer, the duration includes the data from all of the buffers.
If the retrieved duration is zero, or if the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of buffers in the sample.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the total length of the valid data in all of the buffers in the sample. The length is calculated as the sum of the values retrieved by the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves flags associated with the sample.
Currently no flags are defined. Instead, metadata for samples is defined using attributes. To get attributes from a sample, use the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets flags associated with the sample.
Currently no flags are defined. Instead, metadata for samples is defined using attributes. To set attributes on a sample, use the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the presentation time of the sample.
-Receives the presentation time, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sample does not have a presentation time. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the presentation time of the sample.
-The presentation time, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Some pipeline components require samples that have time stamps. Generally the component that generates the data for the sample also sets the time stamp. The Media Session might modify the time stamps.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the duration of the sample.
-Receives the duration, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sample does not have a specified duration. |
If the sample contains more than one buffer, the duration includes the data from all of the buffers.
If the retrieved duration is zero, or if the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the duration of the sample.
-Duration of the sample, in 100-nanosecond units.
If this method succeeds, it returns
This method succeeds if the duration is negative, although negative durations are probably not valid for most types of data. It is the responsibility of the object that consumes the sample to validate the duration.
The duration can also be zero. This might be valid for some types of data. For example, the sample might contain stream metadata with no buffers.
Until this method is called, the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of buffers in the sample.
-Receives the number of buffers in the sample. A sample might contain zero buffers.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets a buffer from the sample, by index.
Note: In most cases, it is safer to use the
A sample might contain more than one buffer. Use the GetBufferByIndex method to enumerate the individual buffers.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Converts a sample with multiple buffers into a sample with a single buffer.
-Receives a reference to the
If the sample contains more than one buffer, this method copies the data from the original buffers into a new buffer, and replaces the original buffer list with the new buffer. The new buffer is returned in the ppBuffer parameter.
If the sample contains a single buffer, this method returns a reference to the original buffer. In typical use, most samples do not contain multiple buffers.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Adds a buffer to the end of the list of buffers in the sample.
-Pointer to the buffer's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
For uncompressed video data, each buffer should contain a single video frame, and samples should not contain multiple frames. In general, storing multiple buffers in a sample is discouraged.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes a buffer at a specified index from the sample.
-Index of the buffer. To find the number of buffers in the sample, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes all of the buffers from the sample.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the total length of the valid data in all of the buffers in the sample. The length is calculated as the sum of the values retrieved by the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Copies the sample data to a buffer. This method concatenates the valid data from all of the buffers of the sample, in order.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| The buffer is not large enough to contain the data. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Callback interface to get media data from the sample-grabber sink.
-The sample-grabber sink enables an application to get data from the Media Foundation pipeline without implementing a custom media sink. To use the sample-grabber sink, the application must perform the following steps:
Implement the
Call
Create a topology that includes an output node with the sink's
Pass this topology to the Media Session.
During playback, the sample-grabber sink calls methods on the application's callback.
You cannot use the sample-grabber sink to get protected content.
-Extends the
This callback interface is used with the sample-grabber sink. It extends the
The OnProcessSampleEx method adds a parameter that contains the attributes for the media sample. You can use the attributes to get information about the sample, such as field dominance and telecine flags.
To use this interface, do the following:
Begins an asynchronous request to write a media sample to the stream.
-When the sample has been written to the stream, the callback object's
Begins an asynchronous request to write a media sample to the stream.
-A reference to the
A reference to the
A reference to the
If this method succeeds, it returns
When the sample has been written to the stream, the callback object's
Completes an asynchronous request to write a media sample to the stream.
-A reference to the
If this method succeeds, it returns
Call this method when the
Provides encryption for media data inside the protected media path (PMP).
-
Retrieves the version of sample protection that the component implements on input.
-
Retrieves the version of sample protection that the component implements on output.
-
Retrieves the version of sample protection that the component implements on input.
-Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the version of sample protection that the component implements on output.
-Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the sample protection certificate.
-Specifies the version number of the sample protection scheme for which to receive a certificate. The version number is specified as a
Receives a reference to a buffer containing the certificate. The caller must free the memory for the buffer by calling CoTaskMemFree.
Receives the size of the ppCert buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
For certain version numbers of sample protection, the downstream component must provide a certificate. Components that do not support these version numbers can return E_NOTIMPL.
-
Retrieves initialization information for sample protection from the upstream component.
-Specifies the version number of the sample protection scheme. The version number is specified as a
Identifier of the output stream. The identifier corresponds to the output stream identifier returned by the
Pointer to a certificate provided by the downstream component.
Size of the certificate, in bytes.
Receives a reference to a buffer that contains the initialization information for downstream component. The caller must free the memory for the buffer by calling CoTaskMemFree.
Receives the size of the ppbSeed buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
This method must be implemented by the upstream component. The method fails if the component does not support the requested sample protection version. Downstream components do not implement this method and should return E_NOTIMPL.
-
Initializes sample protection on the downstream component.
-Specifies the version number of the sample protection scheme. The version number is specified as a
Identifier of the input stream. The identifier corresponds to the output stream identifier returned by the
Pointer to a buffer that contains the initialization data provided by the upstream component. To retrieve this buffer, call
Size of the pbSeed buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Persists media data from a source byte stream to an application-provided byte stream.
The byte stream used for HTTP download implements this interface. To get a reference to this interface, call
Retrieves the percentage of content saved to the provided byte stream.
-
Begins saving a Windows Media file to the application's byte stream.
-Pointer to the
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
When the operation completes, the callback object's
Completes the operation started by
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Cancels the operation started by
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the percentage of content saved to the provided byte stream.
-Receives the percentage of completion.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Begins an asynchronous request to create an object from a URL.
When the Source Resolver creates a media source from a URL, it passes the request to a scheme handler. The scheme handler might create a media source directly from the URL, or it might return a byte stream. If it returns a byte stream, the source resolver uses a byte-stream handler to create the media source from the byte stream.
-The dwFlags parameter must contain the
If the
The following table summarizes the behavior of these two flags when passed to this method:
Flag | Object created |
---|---|
Media source or byte stream | |
Byte stream |
The
When the operation completes, the scheme handler calls the
Begins an asynchronous request to create an object from a URL.
When the Source Resolver creates a media source from a URL, it passes the request to a scheme handler. The scheme handler might create a media source directly from the URL, or it might return a byte stream. If it returns a byte stream, the source resolver uses a byte-stream handler to create the media source from the byte stream.
- The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Cannot open the URL with the requested access (read or write). |
| Unsupported byte stream type. |
The dwFlags parameter must contain the
If the
The following table summarizes the behavior of these two flags when passed to this method:
Flag | Object created |
---|---|
Media source or byte stream | |
Byte stream |
The
When the operation completes, the scheme handler calls the
Completes an asynchronous request to create an object from a URL.
-Pointer to the
Receives a member of the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation was canceled. |
Call this method from inside the
Cancels the current request to create an object from a URL.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You can use this method to cancel a previous call to BeginCreateObject. Because that method is asynchronous, however, it might be completed before the operation can be canceled. Therefore, your callback might still be invoked after you call this method.
The operation cannot be canceled if BeginCreateObject returns
Establishes a one-way secure channel between two objects.
-
Retrieves the client's certificate.
-Receives a reference to a buffer allocated by the object. The buffer contains the client's certificate. The caller must release the buffer by calling CoTaskMemFree.
Receives the size of the ppCert buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Passes the encrypted session key to the client.
-Pointer to a buffer that contains the encrypted session key. This parameter can be
Size of the pbEncryptedSessionKey buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
For a particular seek position, gets the two nearest key frames.
-If an application seeks to a non-key frame, the decoder must start decoding from the previous key frame. This can increase latency, because several frames might get decoded before the requested frame is reached. To reduce latency, an application can call this method to find the two key frames that are closest to the desired time, and then seek to one of those key frames.
-For a particular seek position, gets the two nearest key frames.
-A reference to a
The seek position. The units for this parameter are specified by pguidTimeFormat.
Receives the position of the nearest key frame that appears earlier than pvarStartPosition. The units for this parameter are specified by pguidTimeFormat.
Receives the position of the nearest key frame that appears later than pvarStartPosition. The units for this parameter are specified by pguidTimeFormat.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The time format specified in pguidTimeFormat is not supported. |
If an application seeks to a non-key frame, the decoder must start decoding from the previous key frame. This can increase latency, because several frames might get decoded before the requested frame is reached. To reduce latency, an application can call this method to find the two key frames that are closest to the desired time, and then seek to one of those key frames.
-Implemented by the Microsoft Media Foundation sink writer object.
-To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Called by the media pipeline to get information about a transform provided by the sensor transform.
-The index of the transform for which information is being requested. In the current release, this value will always be 0.
Gets the identifier for the transform.
The attribute store to be populated.
A collection of
If this method succeeds, it returns
Implemented by the Sequencer Source. The sequencer source enables an application to create a sequence of topologies. To create the sequencer source, call
Adds a topology to the end of the queue.
-Pointer to the
A combination of flags from the
Receives the sequencer element identifier for this topology.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The source topology node is missing one of the following attributes: |
The sequencer plays topologies in the order they are queued. You can queue as many topologies as you want to preroll.
The application must indicate to the sequencer when it has queued the last topology on the Media Session. To specify the last topology, set the SequencerTopologyFlags_Last flag in the dwFlags parameter when you append the topology. The sequencer uses this information to end playback with the pipeline. Otherwise, the sequencer waits indefinitely for a new topology to be queued.
-
Deletes a topology from the queue.
-The sequencer element identifier of the topology to delete.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Maps a presentation descriptor to its associated sequencer element identifier and the topology it represents.
-Pointer to the
Receives the sequencer element identifier. This value is assigned by the sequencer source when the application calls
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The presentation descriptor is not valid. |
| This segment was canceled. |
The topology returned in ppTopology is the original topology that the application specified in AppendTopology. The source nodes in this topology contain references to the native sources. Do not queue this topology on the Media Session. Instead, call
Updates a topology in the queue.
-Sequencer element identifier of the topology to update.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sequencer source has been shut down. |
This method is asynchronous. When the operation is completed, the sequencer source sends an
Updates the flags for a topology in the queue.
-Sequencer element identifier of the topology to update.
Bitwise OR of flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Queries an object for a specified service interface.
-A service is an interface that is exposed by one object but might be implemented by another object. The GetService method is equivalent to QueryInterface, with the following difference: when QueryInterface retrieves a reference to an interface, it is guaranteed that you can query the returned interface and get back the original interface. The GetService method does not make this guarantee, because the retrieved interface might be implemented by a separate object.
The
Retrieves a service interface.
-The service identifier (SID) of the service. For a list of service identifiers, see Service Interfaces.
The interface identifier (IID) of the interface being requested.
Receives the interface reference. The caller must release the interface.
Applies to: desktop apps | Metro style apps
Retrieves a service interface.
-The service identifier (SID) of the service. For a list of service identifiers, see Service Interfaces.
Exposed by some Media Foundation objects that must be explicitly shut down.
-The following types of object expose
Any component that creates one of these objects is responsible for calling Shutdown on the object before releasing the object. Typically, applications do not create any of these objects directly, so it is not usually necessary to use this interface in an application.
To obtain a reference to this interface, call QueryInterface on the object.
If you are implementing a custom object, your object can expose this interface, but only if you can guarantee that your application will call Shutdown.
Media sources, media sinks, and synchronous MFTs should not implement this interface, because the Media Foundation pipeline will not call Shutdown on these objects. Asynchronous MFTs must implement this interface.
This interface is not related to the
Some Media Foundation interfaces define a Shutdown method, which serves the same purpose as
Queries the status of an earlier call to the
Until Shutdown is called, the GetShutdownStatus method returns
If an object's Shutdown method is asynchronous, pStatus might receive the value
Shuts down a Media Foundation object and releases all resources associated with the object.
-If this method succeeds, it returns
The
Queries the status of an earlier call to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The Shutdown method has not been called on this object. |
Until Shutdown is called, the GetShutdownStatus method returns
If an object's Shutdown method is asynchronous, pStatus might receive the value
Provides a method that allows content protection systems to get the procedure address of a function in the signed library. This method provides the same functionality as GetProcAddress, which is not available to Windows Store apps.
-See
Gets the procedure address of the specified function in the signed library.
-The entry point name in the DLL that specifies the function.
Receives the address of the entry point.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
See
Controls the master volume level of the audio session associated with the streaming audio renderer (SAR) and the audio capture source.
The SAR and the audio capture source expose this interface as a service. To get a reference to the interface, call
To control the volume levels of individual channels, use the
Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation). For each channel, the attenuation level is the product of:
The master volume level of the audio session.
The volume level of the channel.
For example, if the master volume is 0.8 and the channel volume is 0.5, the attenuation for that channel is 0.8 × 0.5 = 0.4. Volume levels can exceed 1.0 (positive gain), but the audio engine clips any audio samples that exceed zero decibels. To change the volume level of individual channels, use the
Use the following formula to convert the volume level to the decibel (dB) scale:
Attenuation (dB) = 20 * log10(Level)
For example, a volume level of 0.50 represents 6.02 dB of attenuation.
-
Retrieves the master volume level.
-If an external event changes the master volume, the audio renderer sends an
Queries whether the audio is muted.
-Calling
Sets the master volume level.
-Volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The audio renderer is not initialized. |
| The audio renderer was removed from the pipeline. |
Events outside of the application can change the master volume level. For example, the user can change the volume from the system volume-control program (SndVol). If an external event changes the master volume, the audio renderer sends an
Retrieves the master volume level.
-Receives the volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The audio renderer is not initialized. |
| The audio renderer was removed from the pipeline. |
If an external event changes the master volume, the audio renderer sends an
Mutes or unmutes the audio.
-Specify TRUE to mute the audio, or
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The audio renderer is not initialized. |
| The audio renderer was removed from the pipeline. |
This method does not change the volume level returned by the
Queries whether the audio is muted.
-Receives a Boolean value. If TRUE, the audio is muted; otherwise, the audio is not muted.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The audio renderer is not initialized. |
| The audio renderer was removed from the pipeline. |
Calling
Implemented by the Microsoft Media Foundation sink writer object.
-To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Adds a stream to the sink writer.
-A reference to the
Receives the zero-based index of the new stream.
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Sets the input format for a stream on the sink writer.
-The zero-based index of the stream. The index is received by the pdwStreamIndex parameter of the
A reference to the
A reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The underlying media sink does not support the format, no conversion is possible, or a dynamic format change is not possible. |
| The dwStreamIndex parameter is invalid. |
| Could not find an encoder for the encoded format. |
The input format does not have to match the target format that is written to the media sink. If the formats do not match, the method attempts to load an encoder that can encode from the input format to the target format.
After streaming begins, that is, after the first call to
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Initializes the sink writer for writing.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
Call this method after you configure the input streams and before you send any data to the sink writer.
You must call BeginWriting before calling any of the following methods:
The underlying media sink must have at least one input stream. Otherwise, BeginWriting returns
If BeginWriting succeeds, any further calls to BeginWriting return
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Delivers a sample to the sink writer.
-The zero-based index of the stream for this sample.
A reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
You must call
By default, the sink writer limits the rate of incoming data by blocking the calling thread inside the WriteSample method. This prevents the application from delivering samples too quickly. To disable this behavior, set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Indicates a gap in an input stream.
-The zero-based index of the stream.
The position in the stream where the gap in the data occurs. The value is given in 100-nanosecond units, relative to the start of the stream.
If this method succeeds, it returns
For video, call this method once for each missing frame. For audio, call this method at least once per second during a gap in the audio. Set the
Internally, this method calls
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Places a marker in the specified stream.
-The zero-based index of the stream.
Pointer to an application-defined value. The value of this parameter is returned to the caller in the pvContext parameter of the caller's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
To use this method, you must provide an asynchronous callback when you create the sink writer. Otherwise, the method returns
Markers provide a way to be notified when the media sink consumes all of the samples in a stream up to a certain point. The media sink does not process the marker until it has processed all of the samples that came before the marker. When the media sink processes the marker, the sink writer calls the application's OnMarker method. When the callback is invoked, you know that the sink has consumed all of the previous samples for that stream.
For example, to change the format midstream, call PlaceMarker at the point where the format changes. When OnMarker is called, it is safe to call
Internally, this method calls
Note: The pvContext parameter of the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Notifies the media sink that a stream has reached the end of a segment.
-The zero-based index of a stream, or
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
You must call
This method sends an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Flushes one or more streams.
-The zero-based index of the stream to flush, or
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
You must call
For each stream that is flushed, the sink writer drops all pending samples, flushes the encoder, and sends an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Completes all writing operations on the sink writer.
-If this method succeeds, it returns
Call this method after you send all of the input samples to the sink writer. The method performs any operations needed to create the final output from the media sink.
If you provide a callback interface when you create the sink writer, this method completes asynchronously. When the operation completes, the
Internally, this method calls
After this method is called, the following methods will fail:
If you do not call Finalize, the output from the media sink might be incomplete or invalid. For example, required file headers might be missing from the output file.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Queries the underlying media sink or encoder for an interface.
-The zero-based index of a stream to query, or
A service identifier
The interface identifier (IID) of the interface being requested.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
If the dwStreamIndex parameter equals
If the input and output types of the sink are identical and compressed, it's possible that no encoding is required and the video encoder will not be instantiated. In that case, GetServiceForStream will return
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Gets statistics about the performance of the sink writer.
-The zero-based index of a stream to query, or
A reference to an
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid stream number. |
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Callback interface for the Microsoft Media Foundation sink writer.
-Set the callback reference by setting the
The callback methods can be called from any thread, so an object that implements this interface must be thread-safe.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the
Returns an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the
Returns an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Extends the
This interface provides a mechanism for apps that use
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when the transform chain in the
Returns an
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when an asynchronous error occurs with the
Returns an
Provides additional functionality on the sink writer for dynamically changing the media type and encoder configuration.
-The Sink Writer implements this interface in Windows 8.1. To get a reference to this interface, call QueryInterface on the
Dynamically changes the target media type that Sink Writer is encoding to.
-Specifies the stream index.
The new media format to encode to.
The new set of encoding parameters to configure the encoder with. If not specified, previously provided parameters will be used.
If this method succeeds, it returns
The new media type must be supported by the media sink being used and by the encoder MFTs installed on the system.
-Dynamically updates the encoder configuration with a collection of new encoder settings.
-Specifies the stream index.
A set of encoding parameters to configure the encoder with.
If this method succeeds, it returns
The encoder will be configured with these settings after all previously queued input media samples have been sent to it through
Extends the
The Sink Writer implements this interface in Windows 8. To get a reference to this interface, call QueryInterface on the Sink Writer.
-Gets a reference to a Media Foundation transform (MFT) for a specified stream.
-The zero-based index of a stream.
The zero-based index of the MFT to retrieve.
Receives a reference to a
Receives a reference to the
If this method succeeds, it returns
Represents a buffer which contains media data for a
Gets a value that indicates if Append, AppendByteStream, or Remove is in process.
-Gets the buffered time range.
-Gets or sets the timestamp offset for media segments appended to the
Gets or sets the timestamp for the start of the append window.
-Gets or sets the timestamp for the end of the append window.
-Gets a value that indicates if Append, AppendByteStream, or Remove is in process.
-true if Append, AppendByteStream, or Remove is in process; otherwise, false.
Gets the buffered time range.
-The buffered time range.
If this method succeeds, it returns
Gets the timestamp offset for media segments appended to the
The timestamp offset.
Sets the timestamp offset for media segments appended to the
If this method succeeds, it returns
Gets the timestamp for the start of the append window.
-The timestamp for the start of the append window.
Sets the timestamp for the start of the append window.
-The timestamp for the start of the append window.
If this method succeeds, it returns
Gets the timestamp for the end of the append window.
-The timestamp for the end of the append window.
Sets the timestamp for the end of the append window.
-The timestamp for the end of the append window.
Appends the specified media segment to the
If this method succeeds, it returns
Appends the media segment from the specified byte stream to the
If this method succeeds, it returns
Aborts the processing of the current media segment.
-If this method succeeds, it returns
Removes the media segments defined by the specified time range from the
If this method succeeds, it returns
Represents a collection of
Gets the number of
Gets the number of
The number of source buffers in the list.
Gets the
The source buffer.
Provides functionality for raising events associated with
Used to indicate that the source buffer has started updating.
-Used to indicate that the source buffer has been aborted.
-Used to indicate that an error has occurred with the source buffer.
-Used to indicate that the source buffer is updating.
-Used to indicate that the source buffer has finished updating.
-Callback interface to receive notifications from a network source on the progress of an asynchronous open operation.
-
Called by the network source when the open operation begins or ends.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The network source calls this method with the following event types.
For more information, see How to Get Events from the Network Source.
-Implemented by the Microsoft Media Foundation source reader object.
-To create the source reader, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Queries whether a stream is selected.
-The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
Receives TRUE if the stream is selected and will generate data. Receives
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Selects or deselects one or more streams.
-The stream to set. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| All streams. |
Specify TRUE to select streams or
If this method succeeds, it returns
There are two common uses for this method:
For an example of deselecting a stream, see Tutorial: Decoding Audio.
If a stream is deselected, the
Stream selection does not affect how the source reader loads or unloads decoders in memory. In particular, deselecting a stream does not force the source reader to unload the decoder for that stream.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Gets a format that is supported natively by the media source.
-Specifies which stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
The zero-based index of the media type to retrieve.
Receives a reference to the
This method queries the underlying media source for its native output format. Potentially, each source stream can produce more than one output format. Use the dwMediaTypeIndex parameter to loop through the available formats. Generally, file sources offer just one format per stream, but capture devices might offer several formats.
The method returns a copy of the media type, so it is safe to modify the object received in the ppMediaType parameter.
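The loop over dwMediaTypeIndex described above can be sketched with a stand-in reader. StubReader and its string "formats" are illustrative stand-ins, not the real IMFSourceReader API; the real GetNativeMediaType is looped the same way, incrementing the index until the reader reports no more types (MF_E_NO_MORE_TYPES):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative stand-in for the real source reader.
struct StubReader {
    std::vector<std::string> nativeFormats; // pretend media types for one stream
    // Returns false once the index runs past the available formats
    // (the real API returns MF_E_NO_MORE_TYPES instead).
    bool GetNativeMediaType(uint32_t mediaTypeIndex, std::string* out) const {
        if (mediaTypeIndex >= nativeFormats.size()) return false;
        *out = nativeFormats[mediaTypeIndex];
        return true;
    }
};

// Collects every native format by looping the media type index from zero.
std::vector<std::string> EnumerateFormats(const StubReader& reader) {
    std::vector<std::string> formats;
    std::string format;
    for (uint32_t i = 0; reader.GetNativeMediaType(i, &format); ++i)
        formats.push_back(format);
    return formats;
}
```

As the remarks note, a file source would typically yield a single entry per stream here, while a capture device could yield several.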
To set the output type for a stream, call the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Gets the current media type for a stream.
-The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
Receives a reference to the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Sets the media type for a stream.
This media type defines the format that the Source Reader produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one decoder was found for the native stream type, but the type specified by pMediaType was rejected. |
| One or more sample requests are still pending. |
| The dwStreamIndex parameter is invalid. |
| Could not find a decoder for the native stream type. |
For each stream, you can set the media type to any of the following:
Audio resampling support was added to the source reader with Windows 8. In versions of Windows prior to Windows 8, the source reader does not support audio resampling. If you need to resample the audio in versions of Windows earlier than Windows 8, you can use the Audio Resampler DSP.
If you set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Seeks to a new position in the media source.
-A
Value | Meaning |
---|---|
| 100-nanosecond units. |
Some media sources might support additional values.
The position from which playback will be started. The units are specified by the guidTimeFormat parameter. If the guidTimeFormat parameter is GUID_NULL, set the variant type to VT_I8.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| One or more sample requests are still pending. |
The SetCurrentPosition method does not guarantee exact seeking. The accuracy of the seek depends on the media content. If the media content contains a video stream, the SetCurrentPosition method typically seeks to the nearest key frame before the desired position. The distance between key frames depends on several factors, including the encoder implementation, the video content, and the particular encoding settings used to encode the content. The distance between key frames can vary within a single video file (for example, depending on scene complexity).
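Positions passed to SetCurrentPosition with GUID_NULL as the time format are expressed in 100-nanosecond units. A small conversion helper might look like this (SecondsToHns is a hypothetical name for illustration, not part of the API):

```cpp
#include <cstdint>

// Converts a playback position in seconds to the 100-nanosecond units
// used by Media Foundation when the time format is GUID_NULL.
// There are 10,000,000 units of 100 ns in one second.
constexpr int64_t SecondsToHns(double seconds) {
    return static_cast<int64_t>(seconds * 10000000.0);
}
```

The resulting value would be stored in a VT_I8 PROPVARIANT before the call; as noted above, the actual seek typically lands on the nearest key frame before the requested position.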
After seeking, the application should call
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Reads the next sample from the media source.
-The stream to pull data from. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| Get the next available sample, regardless of which stream. |
A bitwise OR of zero or more flags from the
Receives the zero-based index of the stream.
Receives a bitwise OR of zero or more flags from the
Receives the time stamp of the sample, or the time of the stream event indicated in pdwStreamFlags. The time is given in 100-nanosecond units.
Receives a reference to the
If the requested stream is not selected, the return code is
This method can complete synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
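In synchronous mode the typical pattern is a loop that keeps requesting samples until the end-of-stream flag is set. The sketch below models that loop with a stand-in reader; the flag value and sample type here are illustrative stand-ins, not the real MF_SOURCE_READER_FLAG definitions:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical stand-in flag; the real end-of-stream flag comes from the
// MF_SOURCE_READER_FLAG enumeration.
constexpr uint32_t kEndOfStream = 0x1;

// Stand-in for a synchronous source reader.
struct StubReader {
    std::vector<int> samples; // pretend media samples
    size_t next = 0;
    // Mirrors ReadSample's shape: may return a sample, and always reports
    // stream flags through an out-parameter.
    std::optional<int> ReadSample(uint32_t* streamFlags) {
        if (next >= samples.size()) {
            *streamFlags = kEndOfStream;
            return std::nullopt;
        }
        *streamFlags = 0;
        return samples[next++];
    }
};

// Drains the reader, counting delivered samples. Note that a call can set
// flags without delivering a sample, so both conditions are checked.
int DrainAndCount(StubReader& reader) {
    int count = 0;
    for (;;) {
        uint32_t flags = 0;
        auto sample = reader.ReadSample(&flags);
        if (sample) ++count;              // process the sample here
        if (flags & kEndOfStream) break;  // no more data on this stream
    }
    return count;
}
```

Checking the flags on every call, even when no sample is returned, mirrors the real API's behavior of reporting stream events (gaps, format changes, end of stream) through the flags out-parameter.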
Flushes one or more streams.
-The stream to flush. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| All streams. |
If this method succeeds, it returns
The Flush method discards all queued samples and cancels all pending sample requests.
This method can complete either synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
In synchronous mode, the method blocks until the operation is complete.
In asynchronous mode, the application's
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Queries the underlying media source or decoder for an interface.
-The stream or object to query. If the value is
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A service identifier
The interface identifier (IID) of the interface being requested.
Receives a reference to the requested interface. The caller must release the interface.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Gets an attribute from the underlying media source.
-The stream or object to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A
Otherwise, if the dwStreamIndex parameter specifies a stream, guidAttribute specifies a stream descriptor attribute. For a list of values, see Stream Descriptor Attributes.
A reference to a
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-A reference to the
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-A reference to the
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Pointer to the
Call CoInitialize(Ex) and
By default, when the application releases the source reader, the source reader shuts down the media source by calling
To change this default behavior, set the
When using the Source Reader, do not call any of the following methods on the media source:
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Windows Phone 8.1: This API is supported.
-A reference to the
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Gets a format that is supported natively by the media source.
-Specifies which stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
The zero-based index of the media type to retrieve.
Receives a reference to the
This method queries the underlying media source for its native output format. Potentially, each source stream can produce more than one output format. Use the dwMediaTypeIndex parameter to loop through the available formats. Generally, file sources offer just one format per stream, but capture devices might offer several formats.
The method returns a copy of the media type, so it is safe to modify the object received in the ppMediaType parameter.
To set the output type for a stream, call the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Selects or deselects one or more streams.
-The stream to set. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| All streams. |
Specify TRUE to select streams or
If this method succeeds, it returns
There are two common uses for this method:
For an example of deselecting a stream, see Tutorial: Decoding Audio.
If a stream is deselected, the
Stream selection does not affect how the source reader loads or unloads decoders in memory. In particular, deselecting a stream does not force the source reader to unload the decoder for that stream.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Sets the media type for a stream.
This media type defines the format that the Source Reader produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one decoder was found for the native stream type, but the type specified by pMediaType was rejected. |
| One or more sample requests are still pending. |
| The dwStreamIndex parameter is invalid. |
| Could not find a decoder for the native stream type. |
For each stream, you can set the media type to any of the following:
The source reader does not support audio resampling. If you need to resample the audio, you can use the Audio Resampler DSP.
If you set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Sets the media type for a stream.
This media type defines the format that the Source Reader produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one decoder was found for the native stream type, but the type specified by pMediaType was rejected. |
| One or more sample requests are still pending. |
| The dwStreamIndex parameter is invalid. |
| Could not find a decoder for the native stream type. |
For each stream, you can set the media type to any of the following:
The source reader does not support audio resampling. If you need to resample the audio, you can use the Audio Resampler DSP.
If you set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Seeks to a new position in the media source.
-The SetCurrentPosition method does not guarantee exact seeking. The accuracy of the seek depends on the media content. If the media content contains a video stream, the SetCurrentPosition method typically seeks to the nearest key frame before the desired position. The distance between key frames depends on several factors, including the encoder implementation, the video content, and the particular encoding settings used to encode the content. The distance between key frames can vary within a single video file (for example, depending on scene complexity).
After seeking, the application should call
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Gets the current media type for a stream.
-The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
Receives a reference to the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Reads the next sample from the media source.
-The stream to pull data from. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| Get the next available sample, regardless of which stream. |
A bitwise OR of zero or more flags from the
Receives the zero-based index of the stream.
Receives a bitwise OR of zero or more flags from the
Receives the time stamp of the sample, or the time of the stream event indicated in pdwStreamFlags. The time is given in 100-nanosecond units.
Receives a reference to the
If the requested stream is not selected, the return code is MF_E_INVALIDREQUEST. See
This method can complete synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
In asynchronous mode, all of the [out] parameters must be NULL.
In synchronous mode, if the dwStreamIndex parameter is
This method can return flags in the pdwStreamFlags parameter without returning a media sample in ppSample. Therefore, the ppSample parameter can receive a
If there is a gap in the stream, pdwStreamFlags receives the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Flushes one or more streams.
-The stream to flush. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| All streams. |
If this method succeeds, it returns
The Flush method discards all queued samples and cancels all pending sample requests.
This method can complete either synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
In synchronous mode, the method blocks until the operation is complete.
In asynchronous mode, the application's
Note: In Windows 7, there was a bug in the implementation of this method that caused OnFlush to be called before the flush operation completed. A hotfix is available that fixes this bug. For more information, see http://support.microsoft.com/kb/979567.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Queries the underlying media source or decoder for an interface.
-The stream or object to query. If the value is
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A service identifier
The interface identifier (IID) of the interface being requested.
Receives a reference to the requested interface. The caller must release the interface.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Gets an attribute from the underlying media source.
-The stream or object to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A
Otherwise, if the dwStreamIndex parameter specifies a stream, guidAttribute specifies a stream descriptor attribute. For a list of values, see Stream Descriptor Attributes.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Gets an attribute from the underlying media source.
-The stream or object to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A
Otherwise, if the dwStreamIndex parameter specifies a stream, guidAttribute specifies a stream descriptor attribute. For a list of values, see Stream Descriptor Attributes.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Callback interface for the Microsoft Media Foundation source reader.
-Use the
The callback methods can be called from any thread, so an object that implements this interface must be thread-safe.
If you do not specify a callback reference, the source reader operates synchronously.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the
Returns an
The pSample parameter might be
If there is a gap in the stream, dwStreamFlags contains the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the
Returns an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the source reader receives certain events from the media source.
-For stream events, the value is the zero-based index of the stream that sent the event. For source events, the value is
A reference to the
Returns an
In the current implementation, the source reader uses this method to forward the following events to the application:
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Extends the
This interface provides a mechanism for apps that use
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when the transform chain in the
Returns an
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when an asynchronous error occurs with the
Returns an
Extends the
The Source Reader implements this interface in Windows 8. To get a reference to this interface, call QueryInterface on the Source Reader.
-Sets the native media type for a stream on the media source.
-A reference to the
Receives a bitwise OR of zero or more of the following flags.
Value | Meaning |
---|---|
| All effects were removed from the stream. |
| The current output type changed. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. |
| The dwStreamIndex parameter is invalid. |
This method sets the output type that is produced by the media source. Unlike the
In asynchronous mode, this method fails if a sample request is pending. In that case, wait for the OnReadSample callback to be invoked before calling the method. For more information about using the Source Reader in asynchronous mode, see
This method can trigger a change in the output format for the stream. If so, the
This method is useful with audio and video capture devices, because a device might support several output formats. This method enables the application to choose the device format before decoders and other transforms are added.
-Adds a transform, such as an audio or video effect, to a stream.
-The stream to configure. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
A reference to one of the following:
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The transform does not support the current stream format, and no conversion was possible. See Remarks for more information. |
| Invalid request. |
| The dwStreamIndex parameter is invalid. |
This method attempts to add the transform at the end of the current processing chain.
To use this method, make the following sequence of calls:
The AddTransformForStream method will not insert a decoder into the processing chain. If the native stream format is encoded, and the transform requires an uncompressed format, call SetCurrentMediaType to set the uncompressed format (step 1 in the previous list). However, the method will insert a video processor to convert between RGB and YUV formats, if required.
The method fails if the source reader was configured with the
In asynchronous mode, the method also fails if a sample request is pending. In that case, wait for the OnReadSample callback to be invoked before calling the method. For more information about using the Source Reader in asynchronous mode, see
You can add a transform at any time during streaming. However, the method does not flush or drain the pipeline before inserting the transform. Therefore, if data is already in the pipeline, the next sample is not guaranteed to have the transform applied.
-Removes all of the Media Foundation transforms (MFTs) for a specified stream, with the exception of the decoder.
-The stream for which to remove the MFTs. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. |
| The dwStreamIndex parameter is invalid. |
Calling this method can reset the current output type for the stream. To get the new output type, call
In asynchronous mode, this method fails if a sample request is pending. In that case, wait for the OnReadSample callback to be invoked before calling the method. For more information about using the Source Reader in asynchronous mode, see
Gets a reference to a Media Foundation transform (MFT) for a specified stream.
-The stream to query for the MFT. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
The zero-based index of the MFT to retrieve.
Receives a
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwTransformIndex parameter is out of range. |
| The dwStreamIndex parameter is invalid. |
You can use this method to configure an MFT after it is inserted into the processing chain. Do not use the reference returned in ppTransform to set media types on the MFT or to process data. In particular, calling any of the following
If a decoder is present, it appears at index position zero.
To avoid losing any data, you should drain the source reader before calling this method. For more information, see Draining the Data Pipeline.
-Creates a media source from a URL or a byte stream. The Source Resolver implements this interface. To create the source resolver, call
Creates a media source or a byte stream from a URL. This method is synchronous.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags. See remarks below.
Pointer to the
Receives a member of the
Receives a reference to the object's
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The dwFlags parameter contains mutually exclusive flags. |
| | The URL scheme is not supported. |
The dwFlags parameter must contain either the
It is recommended that you do not set
For local files, you can pass the file name in the pwszURL parameter; the file:
scheme is not required.
Creates a media source from a byte stream. This method is synchronous.
- Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
Pointer to the
Receives a member of the
Receives a reference to the media source's
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The dwFlags parameter contains mutually exclusive flags. |
| | This byte stream is not supported. |
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Begins an asynchronous request to create a media source or a byte stream from a URL.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of flags. See Source Resolver Flags.
Pointer to the
Receives an
Pointer to the
Pointer to the
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The dwFlags parameter contains mutually exclusive flags. |
| | The URL scheme is not supported. |
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file:
scheme is not required.
When the operation completes, the source resolver calls the
The usage of the pProps parameter depends on the implementation of the media source.
-Completes an asynchronous request to create an object from a URL.
- Pointer to the
Receives a member of the
Receives a reference to the media source's
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The operation was canceled. |
Call this method from inside your application's
Begins an asynchronous request to create a media source from a byte stream.
-A reference to the byte stream's
A null-terminated string that contains the original URL of the byte stream. This parameter can be
A bitwise OR of one or more flags. See Source Resolver Flags.
A reference to the
Receives an
A reference to the
A pointer to the
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The dwFlags parameter contains mutually exclusive flags. |
| | The byte stream is not supported. |
| | The byte stream does not support seeking. |
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
When the operation completes, the source resolver calls the
Completes an asynchronous request to create a media source from a byte stream.
-Pointer to the
Receives a member of the
Receives a reference to the media source's
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The application canceled the operation. |
Call this method from inside your application's
Cancels an asynchronous request to create an object.
- Pointer to the
If this method succeeds, it returns
You can use this method to cancel a previous call to BeginCreateObjectFromByteStream or BeginCreateObjectFromURL. Because these methods are asynchronous, however, they might be completed before the operation can be canceled. Therefore, your callback might still be invoked after you call this method.
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source or a byte stream from a URL. This method is synchronous.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags.
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file:
scheme is not required.
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source or a byte stream from a URL. This method is synchronous.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags.
Receives a member of the
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file:
scheme is not required.
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source or a byte stream from a URL. This method is synchronous.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags.
Pointer to the
Receives a member of the
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file:
scheme is not required.
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source from a byte stream. This method is synchronous.
- Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source from a byte stream. This method is synchronous.
- Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
Receives a member of the
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source from a byte stream. This method is synchronous.
- Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
Pointer to the
Receives a member of the
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Note: This method cannot be called remotely.
-Implemented by a client and called by Microsoft Media Foundation to get the client Secure Sockets Layer (SSL) certificate requested by the server.
In most HTTPS connections the server provides a certificate so that the client can ensure the identity of the server. However, in certain cases the server might want to verify the identity of the client by requesting that the client send a certificate. For this scenario, a client application must provide a mechanism for Media Foundation to retrieve the client-side certificate while opening an HTTPS URL with the source resolver or the scheme handler. The application must implement
If the
Gets the client SSL certificate synchronously.
-Pointer to a string that contains the URL for which a client-side SSL certificate is required. Media Foundation can resolve the scheme and send the request to the server.
Pointer to the buffer that stores the certificate. The caller must free the buffer by calling CoTaskMemFree.
Pointer to a DWORD variable that receives the number of bytes required to hold the certificate data in the buffer pointed by *ppbData.
If this method succeeds, it returns
Starts an asynchronous call to get the client SSL certificate.
-A null-terminated string that contains the URL for which a client-side SSL certificate is required. Media Foundation can resolve the scheme and send the request to the server.
A reference to the
A reference to the
If this method succeeds, it returns
When the operation completes, the callback object's
Completes an asynchronous request to get the client SSL certificate.
-A reference to the
Receives a reference to the buffer that stores the certificate. The caller must free the buffer by calling CoTaskMemFree.
Receives the size of the ppbData buffer, in bytes.
If this method succeeds, it returns
Call this method after the
Indicates whether the server SSL certificate must be verified by the caller, Media Foundation, or the
Pointer to a string that contains the URL that is sent to the server.
Pointer to a
Pointer to a
If this method succeeds, it returns
Called by Media Foundation when the server SSL certificate has been received; indicates whether the server certificate is accepted.
-Pointer to a string that contains the URL used to send the request to the server, and for which a server-side SSL certificate has been received.
Pointer to a buffer that contains the server SSL certificate.
Pointer to a DWORD variable that indicates the size of pbData in bytes.
Pointer to a
If this method succeeds, it returns
Gets information about one stream in a media source.
-A presentation descriptor contains one or more stream descriptors. To get the stream descriptors from a presentation descriptor, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an identifier for the stream.
-The stream identifier uniquely identifies a stream within a presentation. It does not change throughout the lifetime of the stream. For example, if the presentation changes while the source is running, the index number of the stream may change, but the stream identifier does not.
In general, stream identifiers do not have a specific meaning, other than to identify the stream. Some media sources may assign stream identifiers based on meaningful values, such as packet identifiers, but this depends on the implementation.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type handler for the stream. The media type handler can be used to enumerate supported media types for the stream, get the current media type, and set the media type.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an identifier for the stream.
-Receives the stream identifier.
If this method succeeds, it returns
The stream identifier uniquely identifies a stream within a presentation. It does not change throughout the lifetime of the stream. For example, if the presentation changes while the source is running, the index number of the stream may change, but the stream identifier does not.
In general, stream identifiers do not have a specific meaning, other than to identify the stream. Some media sources may assign stream identifiers based on meaningful values, such as packet identifiers, but this depends on the implementation.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type handler for the stream. The media type handler can be used to enumerate supported media types for the stream, get the current media type, and set the media type.
-Receives a reference to the
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Passes configuration information to the media sinks that are used for streaming the content. Optionally, this interface is supported by media sinks. The built-in ASF streaming media sink and the MP3 media sink implement this interface.
-Called by the streaming media client before the Media Session starts streaming to specify the byte offset or the time offset.
A Boolean value that specifies whether qwSeekOffset gives a byte offset or a time offset.
| Value | Meaning |
|---|---|
| | The qwSeekOffset parameter specifies a byte offset. |
| | The qwSeekOffset parameter specifies the time position in 100-nanosecond units. |
A byte offset or a time offset, depending on the value passed in fSeekOffsetIsByteOffset. Time offsets are specified in 100-nanosecond units.
If this method succeeds, it returns
Represents a stream on a media sink object.
-
Retrieves the media sink that owns this stream sink.
-
Retrieves the stream identifier for this stream sink.
-
Retrieves the media sink that owns this stream sink.
-Receives a reference to the media sink's
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The media sink's Shutdown method has been called. |
| | This stream was removed from the media sink and is no longer valid. |
Retrieves the stream identifier for this stream sink.
-Receives the stream identifier. If this stream sink was added by calling
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The media sink's Shutdown method has been called. |
| | This stream was removed from the media sink and is no longer valid. |
Delivers a sample to the stream. The media sink processes the sample.
-Pointer to the
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The media sink is in the wrong state to receive a sample. For example, preroll is complete but the presentation clock has not started yet. |
| | The sample has an invalid time stamp. See Remarks. |
| | The media sink is paused or stopped and cannot process the sample. |
| | The presentation clock was not set. Call |
| | The sample does not have a time stamp. |
| | The stream sink has not been initialized. |
| | The media sink's Shutdown method has been called. |
| | This stream was removed from the media sink and is no longer valid. |
Call this method when the stream sink sends an
This method can return
Negative time stamps.
Time stamps that jump backward (within the same stream).
The time stamps for one stream have drifted too far from the time stamps on another stream within the same media sink (for example, an archive sink that multiplexes the streams).
Not every media sink returns an error code in these situations.
-
Places a marker in the stream.
- Specifies the marker type, as a member of the
Optional reference to a
Optional reference to a
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The media sink's Shutdown method has been called. |
| | This stream was removed from the media sink and is no longer valid. |
This method causes the stream sink to send an
Causes the stream sink to drop any samples that it has received and has not rendered yet.
-The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
| | The stream sink has not been initialized yet. You might need to set a media type. |
| | The media sink's Shutdown method has been called. |
| | This stream was removed from the media sink and is no longer valid. |
If any samples are still queued from previous calls to the
Any pending marker events from the
This method is synchronous. It does not return until the sink has discarded all pending samples.
-Provides a method that retrieves system ID data.
-Retrieves system ID data.
-The size in bytes of the returned data.
Receives the returned data. The caller must free this buffer by calling CoTaskMemFree.
The method returns an
| Return code | Description |
|---|---|
| | The method succeeded. |
Sets up the
If this method succeeds, it returns
Converts between Society of Motion Picture and Television Engineers (SMPTE) time codes and 100-nanosecond time units.
-If an object supports this interface, it must expose the interface as a service. To get a reference to the interface, call
The Advanced Streaming Format (ASF) media source exposes this interface.
-Starts an asynchronous call to convert Society of Motion Picture and Television Engineers (SMPTE) time code to 100-nanosecond units.
-Time in SMPTE time code to convert. The vt member of the
Pointer to the
Pointer to the
The method returns an
| Return code | Description |
|---|---|
| | pPropVarTimecode is not VT_I8. |
| | The object's Shutdown method was called. |
| | The byte stream is not seekable. The time code cannot be read from the end of the byte stream. |
When the asynchronous method completes, the callback object's
The value of pPropVarTimecode is a 64-bit unsigned value typed as a LONGLONG. The upper DWORD contains the range. (A range is a continuous series of time codes.) The lower DWORD contains the time code in the form of a hexadecimal number 0xhhmmssff, where each 2-byte sequence is read as a decimal value.
```cpp
void CreateTimeCode(
    DWORD dwFrames,
    DWORD dwSeconds,
    DWORD dwMinutes,
    DWORD dwHours,
    DWORD dwRange,
    PROPVARIANT *pvar
    )
{
    ULONGLONG ullTimecode = ((ULONGLONG)dwRange) << 32;
    ullTimecode += dwFrames % 10;
    ullTimecode += (((ULONGLONG)dwFrames) / 10) << 4;
    ullTimecode += (((ULONGLONG)dwSeconds) % 10) << 8;
    ullTimecode += (((ULONGLONG)dwSeconds) / 10) << 12;
    ullTimecode += (((ULONGLONG)dwMinutes) % 10) << 16;
    ullTimecode += (((ULONGLONG)dwMinutes) / 10) << 20;
    ullTimecode += (((ULONGLONG)dwHours) % 10) << 24;
    ullTimecode += (((ULONGLONG)dwHours) / 10) << 28;
    pvar->vt = VT_I8;
    pvar->hVal.QuadPart = (LONGLONG)ullTimecode;
}
```
Completes an asynchronous request to convert time in Society of Motion Picture and Television Engineers (SMPTE) time code to 100-nanosecond units.
-Pointer to the
Receives the converted time.
If this method succeeds, it returns
Call this method after the
Starts an asynchronous call to convert time in 100-nanosecond units to Society of Motion Picture and Television Engineers (SMPTE) time code.
-The time to convert, in 100-nanosecond units.
Pointer to the
Pointer to the
The method returns an
| Return code | Description |
|---|---|
| | The object's Shutdown method was called. |
| | The byte stream is not seekable. The time code cannot be read from the end of the byte stream. |
When the asynchronous method completes, the callback object's
Completes an asynchronous request to convert time in 100-nanosecond units to Society of Motion Picture and Television Engineers (SMPTE) time code.
-A reference to the
A reference to a
If this method succeeds, it returns
Call this method after the
The value of pPropVarTimecode is a 64-bit unsigned value typed as a LONGLONG. The upper DWORD contains the range. (A range is a continuous series of time codes.) The lower DWORD contains the time code in the form of a hexadecimal number 0xhhmmssff, where each 2-byte sequence is read as a decimal value.
```cpp
HRESULT ParseTimeCode(
    const PROPVARIANT& var,
    DWORD *pdwRange,
    DWORD *pdwFrames,
    DWORD *pdwSeconds,
    DWORD *pdwMinutes,
    DWORD *pdwHours
    )
{
    if (var.vt != VT_I8)
    {
        return E_INVALIDARG;
    }
    ULONGLONG ullTimeCode = (ULONGLONG)var.hVal.QuadPart;
    DWORD dwTimecode = (DWORD)(ullTimeCode & 0xFFFFFFFF);
    *pdwRange = (DWORD)(ullTimeCode >> 32);
    *pdwFrames  =  dwTimecode & 0x0000000F;
    *pdwFrames  += ((dwTimecode & 0x000000F0) >> 4) * 10;
    *pdwSeconds =  (dwTimecode & 0x00000F00) >> 8;
    *pdwSeconds += ((dwTimecode & 0x0000F000) >> 12) * 10;
    *pdwMinutes =  (dwTimecode & 0x000F0000) >> 16;
    *pdwMinutes += ((dwTimecode & 0x00F00000) >> 20) * 10;
    *pdwHours   =  (dwTimecode & 0x0F000000) >> 24;
    *pdwHours   += ((dwTimecode & 0xF0000000) >> 28) * 10;
    return S_OK;
}
```
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A timed-text object represents a component of timed text.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the offset to the cue time.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Retrieves a list of all timed-text tracks registered with the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of active timed-text tracks in the timed-text component.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of all the timed-text tracks in the timed-text component.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of the timed-metadata tracks in the timed-text component.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables or disables inband mode.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Determines whether inband mode is enabled.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Registers a timed-text notify object.
-A reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Selects or deselects a track of text in the timed-text component.
-The identifier of the track to select.
Specifies whether to select or deselect a track of text. Specify TRUE to select the track or
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Adds a timed-text data source.
-A reference to the
Null-terminated wide-character string that contains the label of the data source.
Null-terminated wide-character string that contains the language of the data source.
A
Specifies whether to add the default data source. Specify TRUE to add the default data source or
Receives a reference to the unique identifier for the added track.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Adds a timed-text data source from the specified URL.
-The URL of the timed-text data source.
Null-terminated wide-character string that contains the label of the data source.
Null-terminated wide-character string that contains the language of the data source.
A
Specifies whether to add the default data source. Specify TRUE to add the default data source or
Receives a reference to the unique identifier for the added track.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Removes the timed-text track with the specified identifier.
-The identifier of the track to remove.
If this method succeeds, it returns
Get the identifier for a track by calling GetId.
When a track is removed, all buffered data from the track is also removed.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the offset to the cue time.
-A reference to a variable that receives the offset to the cue time.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Sets the offset to the cue time.
-The offset to the cue time.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Retrieves a list of all timed-text tracks registered with the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of active timed-text tracks in the timed-text component.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of all the timed-text tracks in the timed-text component.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of the timed-metadata tracks in the timed-text component.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables or disables inband mode.
- Specifies whether inband mode is enabled. If TRUE, inband mode is enabled. If
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Determines whether inband mode is enabled.
-Returns whether inband mode is enabled. If TRUE, inband mode is enabled. If
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents the data content of a timed-text object.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the data content of the timed-text object.
-A reference to a memory block that receives a reference to the data content of the timed-text object.
A reference to a variable that receives the length in bytes of the data content.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the data content of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the identifier of a timed-text cue.
-The identifier retrieved by this method is dynamically generated by the system and is guaranteed to uniquely identify a cue within a single timed-text track. It is not guaranteed to be unique across tracks. If a cue already has an identifier that is provided in the text-track data format, this ID can be retrieved by calling GetOriginalId.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the kind of timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the start time of the cue in the track.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the duration time of the cue in the track.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the identifier of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the data content of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets info about the display region of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets info about the style of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the number of lines of text in the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the identifier of a timed-text cue.
-The identifier of a timed-text cue.
The identifier retrieved by this method is dynamically generated by the system and is guaranteed to uniquely identify a cue within a single timed-text track. It is not guaranteed to be unique across tracks. If a cue already has an identifier that is provided in the text-track data format, this ID can be retrieved by calling GetOriginalId.
Gets the cue identifier that is provided in the text-track data format, if available.
-The cue identifier that is provided in the text-track data format.
If this method succeeds, it returns
This method retrieves an identifier for the cue that is included in the source data, if one was specified. The system dynamically generates identifiers for cues that are guaranteed to be unique within a single timed-text track. To obtain this system-generated ID, call GetId.
Gets the kind of timed-text cue.
-Returns a
Gets the start time of the cue in the track.
-Returns the start time of the cue in the track.
Gets the duration time of the cue in the track.
-Returns the duration time of the cue in the track.
Gets the identifier of the timed-text cue.
-Returns the identifier of the timed-text cue.
Gets the data content of the timed-text cue.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
Gets info about the display region of the timed-text cue.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
Gets info about the style of the timed-text cue.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
Gets the number of lines of text in the timed-text cue.
-Returns the number of lines of text.
Gets a line of text in the cue from the index of the line.
-The index of the line of text in the cue to retrieve.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns
Represents a block of formatted timed-text.
Gets the number of subformats in the formatted timed-text object.
Gets the text in the formatted timed-text object.
-A reference to a variable that receives the null-terminated wide-character string that contains the text.
If this method succeeds, it returns
Gets the number of subformats in the formatted timed-text object.
-Returns the number of subformats.
Gets a subformat in the formatted timed-text object.
-The index of the subformat in the formatted timed-text object.
A reference to a variable that receives the first character of the subformat.
A reference to a variable that receives the length, in characters, of the subformat.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns
Interface that defines callbacks for Microsoft Media Foundation Timed Text notifications.
Called when a text track is added.
-The identifier of the track that was added.
Called when a text track is removed.
-The identifier of the track that was removed.
Called when a track is selected or deselected.
-The identifier of the track that was selected or deselected.
TRUE if the track was selected.
Called when an error occurs in a text track.
-An
The extended error code for the last error.
The identifier of the track on which the error occurred.
Called when a cue event occurs in a text track.
-A value specifying the type of event that has occurred.
The current time when the cue event occurred.
The
Resets the timed-text-notify object.
Represents the display region of a timed-text object.
Gets the background color of the region.
Gets the writing mode of the region.
Gets the display alignment of the region.
Determines whether a clip of text overflowed the region.
Determines whether the word wrap feature is enabled in the region.
Gets the Z-index (depth) of the region.
Gets the scroll mode of the region.
Gets the name of the region.
-A reference to a variable that receives the null-terminated wide-character string that contains the name of the region.
If this method succeeds, it returns
Gets the position of the region.
-A reference to a variable that receives the X-coordinate of the position.
A reference to a variable that receives the Y-coordinate of the position.
A reference to a variable that receives a
If this method succeeds, it returns
Gets the extent of the region.
-A reference to a variable that receives the width of the region.
A reference to a variable that receives the height of the region.
A reference to a variable that receives a
If this method succeeds, it returns
Gets the background color of the region.
-A reference to a variable that receives a
If this method succeeds, it returns
Gets the writing mode of the region.
-A reference to a variable that receives a
If this method succeeds, it returns
Gets the display alignment of the region.
-A reference to a variable that receives a
If this method succeeds, it returns
Gets the height of each line of text in the region.
-A reference to a variable that receives the height of each line of text in the region.
A reference to a variable that receives a
If this method succeeds, it returns
Determines whether a clip of text overflowed the region.
-A reference to a variable that receives a value that specifies whether a clip of text overflowed the region. The variable specifies TRUE if the clip overflowed; otherwise,
If this method succeeds, it returns
Gets the padding that surrounds the region.
-A reference to a variable that receives the padding before the start of the region.
A reference to a variable that receives the start of the region.
A reference to a variable that receives the padding after the end of the region.
A reference to a variable that receives the end of the region.
A reference to a variable that receives a
If this method succeeds, it returns
Determines whether the word wrap feature is enabled in the region.
-A reference to a variable that receives a value that specifies whether the word wrap feature is enabled in the region. The variable specifies TRUE if word wrap is enabled; otherwise,
If this method succeeds, it returns
Gets the Z-index (depth) of the region.
-A reference to a variable that receives the Z-index (depth) of the region.
If this method succeeds, it returns
Gets the scroll mode of the region.
-A reference to a variable that receives a
If this method succeeds, it returns
Gets the color of the timed-text style.
Determines whether the timed-text style is external.
Gets the color of the timed-text style.
Gets the background color of the timed-text style.
Determines whether the style of timed text always shows the background.
Gets the font style of the timed-text style.
Determines whether the style of timed text is bold.
Determines whether the right to left writing mode of the timed-text style is enabled.
Gets the text alignment of the timed-text style.
Gets how text is decorated for the timed-text style.
Gets the name of the timed-text style.
-A reference to a variable that receives the null-terminated wide-character string that contains the name of the style.
If this method succeeds, it returns
Determines whether the timed-text style is external.
-Returns whether the timed-text style is external. If TRUE, the timed-text style is external; otherwise,
Gets the font family of the timed-text style.
-A reference to a variable that receives the null-terminated wide-character string that contains the font family of the style.
If this method succeeds, it returns
Gets the font size of the timed-text style.
-A reference to a variable that receives the font size of the timed-text style.
A reference to a variable that receives a
If this method succeeds, it returns
Gets the color of the timed-text style.
-A reference to a variable that receives a
If this method succeeds, it returns
Gets the background color of the timed-text style.
-A reference to a variable that receives a
If this method succeeds, it returns
Determines whether the style of timed text always shows the background.
-A reference to a variable that receives a value that specifies whether the style of timed text always shows the background. The variable specifies TRUE if the background is always shown; otherwise,
If this method succeeds, it returns
Gets the font style of the timed-text style.
-A reference to a variable that receives a
If this method succeeds, it returns
Determines whether the style of timed text is bold.
-A reference to a variable that receives a value that specifies whether the style of timed text is bold. The variable specifies TRUE if the style is bold; otherwise,
If this method succeeds, it returns
Determines whether the right to left writing mode of the timed-text style is enabled.
-A reference to a variable that receives a value that specifies whether the right to left writing mode is enabled. The variable specifies TRUE if the right to left writing mode is enabled; otherwise,
If this method succeeds, it returns
Gets the text alignment of the timed-text style.
-A reference to a variable that receives a
If this method succeeds, it returns
Gets how text is decorated for the timed-text style.
-A reference to a variable that receives a combination of
If this method succeeds, it returns
Gets the text outline for the timed-text style.
-A reference to a variable that receives a
A reference to a variable that receives the thickness.
A reference to a variable that receives the blur radius.
A reference to a variable that receives a
If this method succeeds, it returns
Represents a track of timed text.
Gets the identifier of the track of timed text.
Sets the label of a timed-text track.
Gets the kind of timed-text track.
Determines whether the timed-text track is inband.
Determines whether the timed-text track is active.
Gets a value indicating the error type of the latest error associated with the track.
Gets the extended error code for the latest error associated with the track.
-If the most recent error was associated with a track, this value will be the same
Gets a
Gets the identifier of the track of timed text.
-Returns the identifier of the track.
Gets the label of the track.
-A reference to a variable that receives the null-terminated wide-character string that contains the label of the track.
If this method succeeds, it returns
Sets the label of a timed-text track.
-A reference to a null-terminated wide-character string that contains the label of the track.
If this method succeeds, it returns
Gets the language of the track.
-A reference to a variable that receives the null-terminated wide-character string that contains the language of the track.
If this method succeeds, it returns
Gets the kind of timed-text track.
-Returns a
Determines whether the timed-text track is inband.
-Returns whether the timed-text track is inband. If TRUE, the timed-text track is inband; otherwise,
Gets the in-band metadata of the track.
-A reference to a variable that receives the null-terminated wide-character string that contains the in-band metadata of the track.
If this method succeeds, it returns
Determines whether the timed-text track is active.
-Returns whether the timed-text track is active. If TRUE, the timed-text track is active; otherwise,
Gets a value indicating the error type of the latest error associated with the track.
-A value indicating the error type of the latest error associated with the track.
Gets the extended error code for the latest error associated with the track.
-The extended error code for the latest error associated with the track.
If the most recent error was associated with a track, this value will be the same
Gets a
A
If this method succeeds, it returns
Represents a list of timed-text tracks.
Gets the length, in tracks, of the timed-text-track list.
Gets the length, in tracks, of the timed-text-track list.
-Returns the length, in tracks, of the timed-text-track list.
Gets a text track in the list from the index of the track.
-The index of the track in the list to retrieve.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns
Gets a text track in the list from the identifier of the track.
-The identifier of the track in the list to retrieve.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns
Provides a timer that invokes a callback at a specified time.
-The presentation clock exposes this interface. To get a reference to the interface, call QueryInterface.
-
Sets a timer that invokes a callback at the specified time.
-Bitwise OR of zero or more flags from the
The time at which the timer should fire, in units of the clock's frequency. The time is either absolute or relative to the current time, depending on the value of dwFlags.
Pointer to the
Pointer to the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The clock was shut down. |
| The method succeeded, but the clock is stopped. |
If the clock is stopped, the method returns MF_S_CLOCK_STOPPED. The callback will not be invoked until the clock is started.
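The stopped-clock behavior described above can be sketched as a small standalone model. This is a hypothetical simplification, not the real IMFTimer/IMFPresentationClock API: names like ModelClock and TimerStatus are invented, and the deferred callback fires immediately on start rather than at its due time.

```cpp
#include <cassert>
#include <functional>
#include <queue>

// Hypothetical model of the timer contract: setting a timer on a stopped
// clock succeeds with a "clock stopped" status, and the callback is
// deferred until the clock starts.
enum class TimerStatus { Ok, ClockStopped, ClockShutDown };

class ModelClock {
public:
    TimerStatus SetTimer(std::function<void()> callback) {
        if (shutDown_) return TimerStatus::ClockShutDown;  // clock was shut down
        if (running_) { callback(); return TimerStatus::Ok; }  // simplification: fire now
        pending_.push(std::move(callback));  // deferred until the clock starts
        return TimerStatus::ClockStopped;
    }
    void Start() {  // starting the clock releases deferred timers
        running_ = true;
        while (!pending_.empty()) { pending_.front()(); pending_.pop(); }
    }
    void Shutdown() { shutDown_ = true; }
private:
    bool running_ = false, shutDown_ = false;
    std::queue<std::function<void()>> pending_;
};
```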
-
Cancels a timer that was set using the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Because the timer is dispatched asynchronously, the application's timer callback might get invoked even if this method succeeds.
-Creates a fully loaded topology from the input partial topology.
-This method creates any intermediate transforms that are needed to complete the topology. It also sets the input and output media types on all of the objects in the topology. If the method succeeds, the full topology is returned in the ppOutputTopo parameter.
You can use the pCurrentTopo parameter to provide a full topology that was previously loaded. If this topology contains objects that are needed in the new topology, the topology loader can re-use them without creating them again. This caching can potentially make the process faster. The objects from pCurrentTopo will not be reconfigured, so you can specify a topology that is actively streaming data. For example, while a topology is still running, you can pre-load the next topology.
Before calling this method, you must ensure that the output nodes in the partial topology have valid
Creates a fully loaded topology from the input partial topology.
-A reference to the
Receives a reference to the
A reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| One or more output nodes contain |
This method creates any intermediate transforms that are needed to complete the topology. It also sets the input and output media types on all of the objects in the topology. If the method succeeds, the full topology is returned in the ppOutputTopo parameter.
You can use the pCurrentTopo parameter to provide a full topology that was previously loaded. If this topology contains objects that are needed in the new topology, the topology loader can re-use them without creating them again. This caching can potentially make the process faster. The objects from pCurrentTopo will not be reconfigured, so you can specify a topology that is actively streaming data. For example, while a topology is still running, you can pre-load the next topology.
Before calling this method, you must ensure that the output nodes in the partial topology have valid
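The node-caching behavior described above can be sketched with a simplified model. This is hypothetical, not the real IMFTopoLoader::Load: nodes are keyed by their identifier, and any node found in the previously loaded topology is re-used as-is rather than created and configured again.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <vector>

// Hypothetical sketch of topology-loader caching: resolving a partial
// topology re-uses nodes from the previous full topology by ID.
struct ModelNode { long long id = 0; bool configured = false; };
using Topology = std::map<long long, std::shared_ptr<ModelNode>>;

// Resolves the partial topology; *reused counts cache hits.
Topology LoadFull(const std::vector<long long>& partialIds,
                  const Topology& previous, int* reused) {
    Topology full;
    *reused = 0;
    for (long long id : partialIds) {
        auto it = previous.find(id);
        if (it != previous.end()) {
            full[id] = it->second;  // cache hit: object is not reconfigured
            ++*reused;
        } else {
            auto n = std::make_shared<ModelNode>();
            n->id = id;
            n->configured = true;   // new node: create and configure it
            full[id] = n;
        }
    }
    return full;
}
```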
Represents a topology. A topology describes a collection of media sources, sinks, and transforms that are connected in a certain order. These objects are represented within the topology by topology nodes, which expose the
To create a topology, call
Gets the identifier of the topology.
-Gets the number of nodes in the topology.
-Gets the source nodes in the topology.
-Gets the output nodes in the topology.
-Gets the identifier of the topology.
-Receives the identifier, as a TOPOID value.
If this method succeeds, it returns
Adds a node to the topology.
-Pointer to the node's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pNode is invalid, possibly because the node already exists in the topology. |
Removes a node from the topology.
-Pointer to the node's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified node is not a member of this topology. |
This method does not destroy the node, so the
The method breaks any connections between the specified node and other nodes.
-Gets the number of nodes in the topology.
-Receives the number of nodes.
If this method succeeds, it returns
Gets a node in the topology, specified by index.
- The zero-based index of the node. To get the number of nodes in the topology, call
Receives a reference to the node's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index is less than zero. |
| No node can be found at the index wIndex. |
Removes all nodes from the topology.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You do not need to clear a topology before disposing of it. The Clear method is called automatically when the topology is destroyed.
-Converts this topology into a copy of another topology.
- A reference to the
If this method succeeds, it returns
This method does the following:
Gets a node in the topology, specified by node identifier.
- The identifier of the node to retrieve. To get a node's identifier, call
Receives a reference to the node's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The topology does not contain a node with this identifier. |
Gets the source nodes in the topology.
-Receives a reference to the
If this method succeeds, it returns
Gets the output nodes in the topology.
- Receives a reference to the
If this method succeeds, it returns
Represents a node in a topology. The following node types are supported:
To create a new node, call the
Sets the object associated with this node.
-All node types support this method, but the object reference is not used by every node type.
Node type | Object reference |
---|---|
Source node. | Not used. |
Transform node. | |
Output node | |
Tee node. | Not used. |
If the object supports
Gets the object associated with this node.
-
Retrieves the node type.
-Retrieves or sets the identifier of the node.
-When a node is first created, it is assigned an identifier. Node identifiers are unique within a topology, but can be reused across several topologies. The topology loader uses the identifier to look up nodes in the previous topology, so that it can reuse objects from the previous topology.
To find a node in a topology by its identifier, call
Retrieves the number of input streams that currently exist on this node.
-The input streams may or may not be connected to output streams on other nodes. To get the node that is connected to a specified input stream, call
The
Retrieves the number of output streams that currently exist on this node.
-The output streams may or may not be connected to input streams on other nodes. To get the node that is connected to a specific output stream on this node, call
The
Sets the object associated with this node.
-A reference to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
All node types support this method, but the object reference is not used by every node type.
Node type | Object reference |
---|---|
Source node. | Not used. |
Transform node. | |
Output node | |
Tee node. | Not used. |
If the object supports
Gets the object associated with this node.
- Receives a reference to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There is no object associated with this node. |
Retrieves the node type.
-Receives the node type, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the identifier of the node.
-Receives the identifier.
If this method succeeds, it returns
When a node is first created, it is assigned an identifier. Node identifiers are unique within a topology, but can be reused across several topologies. The topology loader uses the identifier to look up nodes in the previous topology, so that it can reuse objects from the previous topology.
To find a node in a topology by its identifier, call
Sets the identifier for the node.
-The identifier for the node.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The TOPOID has already been set for this object. |
When a node is first created, it is assigned an identifier. Typically there is no reason for an application to override the identifier. Within a topology, each node identifier should be unique.
-
Retrieves the number of input streams that currently exist on this node.
-Receives the number of input streams.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The input streams may or may not be connected to output streams on other nodes. To get the node that is connected to a specified input stream, call
The
Retrieves the number of output streams that currently exist on this node.
-Receives the number of output streams.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The output streams may or may not be connected to input streams on other nodes. To get the node that is connected to a specific output stream on this node, call
The
Connects an output stream from this node to the input stream of another node.
-Zero-based index of the output stream on this node.
Pointer to the
Zero-based index of the input stream on the other node.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The method failed. |
| Invalid parameter. |
Node connections represent data flow from one node to the next. The streams are logical, and are specified by index.
If the node is already connected at the specified output, the method breaks the existing connection. If dwOutputIndex or dwInputIndexOnDownstreamNode specify streams that do not exist yet, the method adds as many streams as needed.
This method checks for certain invalid conditions:
An output node cannot have any output connections. If you call this method on an output node, the method returns E_FAIL.
A node cannot be connected to itself. If pDownstreamNode specifies the same node as the method call, the method returns E_INVALIDARG.
However, if the method succeeds, it does not guarantee that the node connection is valid. It is possible to create a partial topology that the topology loader cannot resolve. If so, the
To break an existing node connection, call
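The connection rules above can be illustrated with a simplified node model. This is a hypothetical sketch, not the real IMFTopologyNode: stream lists auto-grow, an existing connection at the output index is replaced, an output node refuses outputs, and self-connection is rejected, mirroring the E_FAIL and E_INVALIDARG cases described in the remarks.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical model of the ConnectOutput rules.
enum Hr { S_OK_ = 0, E_FAIL_ = -1, E_INVALIDARG_ = -2 };

struct Node {
    bool isOutputNode = false;
    struct Link { Node* node = nullptr; std::size_t inIndex = 0; };
    std::vector<Link> outputs;  // indexed by output stream
    std::vector<int>  inputs;   // placeholder input streams

    Hr ConnectOutput(std::size_t outIndex, Node* downstream, std::size_t inIndex) {
        if (isOutputNode) return E_FAIL_;              // output nodes have no outputs
        if (downstream == this) return E_INVALIDARG_;  // no self-connection
        if (outputs.size() <= outIndex)
            outputs.resize(outIndex + 1);              // add streams as needed
        if (downstream->inputs.size() <= inIndex)
            downstream->inputs.resize(inIndex + 1);
        outputs[outIndex] = Link{downstream, inIndex}; // breaks any old connection
        return S_OK_;
    }
};
```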
Disconnects an output stream on this node.
-Zero-based index of the output stream to disconnect.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwOutputIndex parameter is out of range. |
| The specified output stream is not connected to another node. |
If the specified output stream is connected to another node, this method breaks the connection.
-
Retrieves the node that is connected to a specified input stream on this node.
-Zero-based index of an input stream on this node.
Receives a reference to the
Receives the index of the output stream that is connected to this node's input stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index is out of range. |
| The specified input stream is not connected to another node. |
Retrieves the node that is connected to a specified output stream on this node.
-Zero-based index of an output stream on this node.
Receives a reference to the
Receives the index of the input stream that is connected to this node's output stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index is out of range. |
| The specified input stream is not connected to another node. |
Sets the preferred media type for an output stream on this node.
-Zero-based index of the output stream.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This node is an output node. |
The preferred type is a hint for the topology loader.
Do not call this method after loading a topology or setting a topology on the Media Session. Changing the preferred type on a running topology can cause connection errors.
If no output stream exists at the specified index, the method creates new streams up to and including the specified index number.
Output nodes cannot have outputs. If this method is called on an output node, it returns E_NOTIMPL.
-
Retrieves the preferred media type for an output stream on this node.
-Zero-based index of the output stream.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This node does not have a preferred output type. |
| Invalid stream index. |
| This node is an output node. |
Output nodes cannot have outputs. If this method is called on an output node, it returns E_NOTIMPL.
The preferred output type provides a hint to the topology loader. In a fully resolved topology, there is no guarantee that every topology node will have a preferred output type. To get the actual media type for a node, you must get a reference to the node's underlying object. (For more information, see
Sets the preferred media type for an input stream on this node.
-Zero-based index of the input stream.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This node is a source node. |
The preferred type is a hint for the topology loader.
Do not call this method after loading a topology or setting a topology on the Media Session. Changing the preferred type on a running topology can cause connection errors.
If no input stream exists at the specified index, the method creates new streams up to and including the specified index number.
Source nodes cannot have inputs. If this method is called on a source node, it returns E_NOTIMPL.
-
Retrieves the preferred media type for an input stream on this node.
-Zero-based index of the input stream.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This node does not have a preferred input type. |
| Invalid stream index. |
| This node is a source node. |
Source nodes cannot have inputs. If this method is called on a source node, it returns E_NOTIMPL.
The preferred input type provides a hint to the topology loader. In a fully resolved topology, there is no guarantee that every topology node will have a preferred input type. To get the actual media type for a node, you must get a reference to the node's underlying object. (For more information, see
Copies the data from another topology node into this node.
- A reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The node types do not match. |
The two nodes must have the same node type. To get the node type, call
This method copies the object reference, preferred types, and attributes from pNode to this node. It also copies the TOPOID that uniquely identifies each node in a topology. It does not duplicate any of the connections from pNode to other nodes.
The purpose of this method is to copy nodes from one topology to another. Do not use duplicate nodes within the same topology.
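The copy semantics described above can be sketched as a small model. This is hypothetical, not the real IMFTopologyNode::CloneFrom: the node type must match, attributes and the TOPOID are copied, and connections to other nodes are deliberately left out.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical model of CloneFrom: copy type-checked state, never links.
enum CloneHr { CLONE_OK, CLONE_E_INVALIDARG };

struct CNode {
    int type = 0;
    long long topoId = 0;
    std::map<std::string, std::string> attrs;
    std::vector<CNode*> connections;  // never copied

    CloneHr CloneFrom(const CNode& src) {
        if (type != src.type) return CLONE_E_INVALIDARG;  // node types must match
        topoId = src.topoId;   // the identifier is copied
        attrs = src.attrs;     // attributes are copied; connections untouched
        return CLONE_OK;
    }
};
```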
-Updates the attributes of one or more nodes in the Media Session's current topology.
The Media Session exposes this interface as a service. To get a reference to the interface, call
Currently the only attribute that can be updated is the
Updates the attributes of one or more nodes in the current topology.
-Reserved.
The number of elements in the pUpdates array.
Pointer to an array of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Currently the only attribute that can be updated is the
Enables a custom video mixer or video presenter to get interface references from the Enhanced Video Renderer (EVR). The mixer can also use this interface to get interface references from the presenter, and the presenter can use it to get interface references from the mixer.
To use this interface, implement the
Retrieves an interface from the enhanced video renderer (EVR), or from the video mixer or video presenter.
-Specifies the scope of the search. Currently this parameter is ignored. Use the value
Reserved, must be zero.
Service
Interface identifier of the requested interface.
Array of interface references. If the method succeeds, each member of the array contains either a valid interface reference or
Pointer to a value that specifies the size of the ppvObjects array. The value must be at least 1. In the current implementation, there is no reason to specify an array size larger than one element. The value is not changed on output.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The requested interface is not available. |
| The method was not called from inside the |
| The object does not support the specified service |
This method can be called only from inside the
The presenter can use this method to query the EVR and the mixer. The mixer can use it to query the EVR and the presenter. Which objects are queried depends on the caller and the service
Caller | Service | Objects queried |
---|---|---|
Presenter | MR_VIDEO_RENDER_SERVICE | EVR |
Presenter | MR_VIDEO_MIXER_SERVICE | Mixer |
Mixer | MR_VIDEO_RENDER_SERVICE | Presenter and EVR |
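The caller/service table above can be expressed as a small dispatch function. The names below are illustrative stand-ins, not the real MR_VIDEO_RENDER_SERVICE and MR_VIDEO_MIXER_SERVICE constants.

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical sketch: which objects are queried depends on the caller
// and the requested service.
enum class Caller { Presenter, Mixer };
enum class Service { RenderService, MixerService };

std::set<std::string> ObjectsQueried(Caller who, Service svc) {
    if (who == Caller::Presenter && svc == Service::RenderService) return {"EVR"};
    if (who == Caller::Presenter && svc == Service::MixerService)  return {"Mixer"};
    if (who == Caller::Mixer && svc == Service::RenderService)     return {"Presenter", "EVR"};
    return {};  // unsupported combination: nothing is queried
}
```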
The following interfaces are available from the EVR:
IMediaEventSink. This interface is documented in the DirectShow SDK documentation.
The following interfaces are available from the mixer:
Initializes a video mixer or presenter. This interface is implemented by mixers and presenters, and enables them to query the enhanced video renderer (EVR) for interface references.
-When the EVR loads the video mixer and the video presenter, the EVR queries the object for this interface and calls InitServicePointers. Inside the InitServicePointers method, the object can query the EVR for interface references.
-
Signals the mixer or presenter to query the enhanced video renderer (EVR) for interface references.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The
When the EVR calls
Signals the object to release the interface references obtained from the enhanced video renderer (EVR).
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
After this method is called, any interface references obtained during the previous call to
Tracks the reference counts on a video media sample. Video samples created by the
Use this interface to determine whether it is safe to delete or re-use the buffer contained in a sample. One object assigns itself as the owner of the video sample by calling SetAllocator. When all objects release their reference counts on the sample, the owner's callback method is invoked.
-
Sets the owner for the sample.
-Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The owner was already set. This method cannot be called twice on the sample. |
When this method is called, the sample holds an additional reference count on itself. When every other object releases its reference counts on the sample, the sample invokes the pSampleAllocator callback method. To get a reference to the sample, call
After the callback is invoked, the sample clears the callback. To reinstate the callback, you must call SetAllocator again.
It is safe to pass in the sample's
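The ownership scheme described above can be sketched as a minimal reference-counting model. This is hypothetical, not the real tracked-sample interface: the sample takes one extra self-reference for its owner, the callback fires once when only that owner reference remains, and the callback is cleared after it is invoked.

```cpp
#include <cassert>
#include <functional>

// Hypothetical model of the sample-owner callback.
class TrackedSample {
public:
    void AddRef() { ++refs_; }
    void Release() {
        if (--refs_ == 1 && onFree_) {  // only the owner's reference remains
            auto cb = onFree_;
            onFree_ = nullptr;          // callback is one-shot; must be reinstated
            cb();
        }
    }
    bool SetAllocator(std::function<void()> cb) {
        if (onFree_) return false;      // owner already set: cannot set twice
        onFree_ = std::move(cb);
        AddRef();                       // extra self-reference held for the owner
        return true;
    }
private:
    int refs_ = 0;
    std::function<void()> onFree_;
};
```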
Implemented by the transcode profile object.
The transcode profile stores configuration settings that the topology builder uses to generate the transcode topology for the output file. These configuration settings are specified by the caller and include audio and video stream properties, encoder settings, and container settings that are specified by the caller.
To create the transcode profile object, call
Gets or sets the audio stream settings that are currently set in the transcode profile.
-If there are no audio attributes set in the transcode profile, the call to GetAudioAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Gets or sets the video stream settings that are currently set in the transcode profile.
-If there are no video attributes set in the transcode profile, the GetVideoAttributes method succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Gets or sets the container settings that are currently set in the transcode profile.
-If there are no container attributes set in the transcode profile, the call to GetContainerAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
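The profile shape described above can be sketched as three optional attribute stores. This is a hypothetical model, not the real transcode profile object, and the AVG_BITRATE key below is only an illustrative attribute name; getting a store that was never set succeeds and yields null, as the remarks describe.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <utility>

// Hypothetical model: audio, video, and container attribute stores.
using Attrs = std::map<std::string, std::string>;

struct ModelTranscodeProfile {
    std::shared_ptr<Attrs> audio, video, container;
    std::shared_ptr<Attrs> GetAudioAttributes() const { return audio; }  // may be null
    std::shared_ptr<Attrs> GetVideoAttributes() const { return video; }
    std::shared_ptr<Attrs> GetContainerAttributes() const { return container; }
    void SetAudioAttributes(std::shared_ptr<Attrs> a) { audio = std::move(a); }
};
```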
Sets audio stream configuration settings in the transcode profile.
To get a list of compatible audio media types supported by the Media Foundation transform (MFT) encoder, call
If this method succeeds, it returns
Gets the audio stream settings that are currently set in the transcode profile.
-Receives a reference to the
If this method succeeds, it returns
If there are no audio attributes set in the transcode profile, the call to GetAudioAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets video stream configuration settings in the transcode profile.
For example code, see
If this method succeeds, it returns
Gets the video stream settings that are currently set in the transcode profile.
-Receives a reference to the
If this method succeeds, it returns
-If there are no video attributes set in the transcode profile, the GetVideoAttributes method succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets container configuration settings in the transcode profile.
For example code, see
If this method succeeds, it returns
Gets the container settings that are currently set in the transcode profile.
-Receives a reference to the
If this method succeeds, it returns
If there are no container attributes set in the transcode profile, the call to GetContainerAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets the name of the encoded output file.
-The media sink will create a local file with the specified file name.
Alternately, you can call
Sets the name of the encoded output file.
-The media sink will create a local file with the specified file name.
Alternately, you can call
Sets an output byte stream for the transcode media sink.
-Call this method to provide a writeable byte stream that will receive the transcoded data.
Alternatively, you can provide the name of an output file, by calling
The pByteStreamActivate parameter must specify an activation object that creates a writeable byte stream. Internally, the transcode media sink calls
IMFByteStream *pByteStream = NULL; hr = pByteStreamActivate->ActivateObject(IID_IMFByteStream, (void**)&pByteStream);
Currently, Microsoft Media Foundation does not provide any byte-stream activation objects. To use this method, an application must provide a custom implementation of
Sets the transcoding profile on the transcode sink activation object.
-Before calling this method, initialize the profile object as follows:
Gets the media types for the audio and video streams specified in the transcode profile.
-Before calling this method, call
Sets the name of the encoded output file.
-Pointer to a null-terminated string that contains the name of the output file.
If this method succeeds, it returns
The media sink will create a local file with the specified file name.
Alternately, you can call
Sets an output byte stream for the transcode media sink.
-A reference to the
If this method succeeds, it returns
Call this method to provide a writeable byte stream that will receive the transcoded data.
Alternatively, you can provide the name of an output file, by calling
The pByteStreamActivate parameter must specify an activation object that creates a writeable byte stream. Internally, the transcode media sink calls
IMFByteStream *pByteStream = NULL; hr = pByteStreamActivate->ActivateObject(IID_IMFByteStream, (void**)&pByteStream);
Currently, Microsoft Media Foundation does not provide any byte-stream activation objects. To use this method, an application must provide a custom implementation of
Sets the transcoding profile on the transcode sink activation object.
-A reference to the
If this method succeeds, it returns
Before calling this method, initialize the profile object as follows:
Gets the media types for the audio and video streams specified in the transcode profile.
-A reference to an
If the method succeeds, the method assigns
If this method succeeds, it returns
Before calling this method, call
Implemented by all Media Foundation Transforms (MFTs).
-Gets the global attribute store for this Media Foundation transform (MFT).
- Use the
Implementation of this method is optional unless the MFT needs to support a particular set of attributes. Exception: Hardware-based MFTs must implement this method. See Hardware MFTs.
-Queries whether the Media Foundation transform (MFT) is ready to produce output data.
- If the method returns the
MFTs are not required to implement this method. If the method returns E_NOTIMPL, you must call ProcessOutput to determine whether the transform has output data.
If the MFT has more than one output stream, but it does not produce samples at the same time for each stream, it can set the
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputStatus. See Creating Hybrid DMO/MFT Objects.
-Gets the minimum and maximum number of input and output streams for this Media Foundation transform (MFT).
-Receives the minimum number of input streams.
Receives the maximum number of input streams. If there is no maximum, receives the value MFT_STREAMS_UNLIMITED.
Receives the minimum number of output streams.
Receives the maximum number of output streams. If there is no maximum, receives the value MFT_STREAMS_UNLIMITED.
If this method succeeds, it returns
If the MFT has a fixed number of streams, the minimum and maximum values are the same.
It is not recommended to create an MFT that supports zero inputs or zero outputs. An MFT with no inputs or no outputs may not be compatible with the rest of the Media Foundation pipeline. You should create a Media Foundation sink or source for this purpose instead.
When an MFT is first created, it is not guaranteed to have the minimum number of streams. To find the actual number of streams, call
This method should not be called with
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetStreamLimits. See Creating Hybrid DMO/MFT Objects.
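The stream-limit contract above can be checked with a small helper. This is an illustrative sketch: StreamLimits and STREAMS_UNLIMITED are stand-ins for the real out-parameters and the MFT_STREAMS_UNLIMITED sentinel.

```cpp
#include <cassert>
#include <climits>

// Hypothetical model: "no maximum" is reported with a sentinel value,
// and a fixed-stream MFT reports min == max on both sides.
const unsigned STREAMS_UNLIMITED = UINT_MAX;

struct StreamLimits { unsigned inMin, inMax, outMin, outMax; };

bool HasFixedStreamCount(const StreamLimits& l) {
    return l.inMin == l.inMax && l.outMin == l.outMax &&
           l.inMax != STREAMS_UNLIMITED && l.outMax != STREAMS_UNLIMITED;
}
```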
-Gets the current number of input and output streams on this Media Foundation transform (MFT).
-Receives the number of input streams.
Receives the number of output streams.
If this method succeeds, it returns
The number of streams includes unselected streams, that is, streams with no media type or a
This method should not be called with
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetStreamCount. See Creating Hybrid DMO/MFT Objects.
-Gets the stream identifiers for the input and output streams on this Media Foundation transform (MFT).
-Number of elements in the pdwInputIDs array.
Pointer to an array allocated by the caller. The method fills the array with the input stream identifiers. The array size must be at least equal to the number of input streams. To get the number of input streams, call
If the caller passes an array that is larger than the number of input streams, the MFT must not write values into the extra array entries.
Number of elements in the pdwOutputIDs array.
Pointer to an array allocated by the caller. The method fills the array with the output stream identifiers. The array size must be at least equal to the number of output streams. To get the number of output streams, call GetStreamCount.
If the caller passes an array that is larger than the number of output streams, the MFT must not write values into the extra array entries.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. See Remarks. |
| One or both of the arrays is too small. |
Stream identifiers are necessary because some MFTs can add or remove streams, so the index of a stream may not be unique. Therefore,
This method can return E_NOTIMPL if both of the following conditions are true:
This method must be implemented if any of the following conditions is true:
All input stream identifiers must be unique within an MFT, and all output stream identifiers must be unique. However, an input stream and an output stream can share the same identifier.
If the client adds an input stream, the client assigns the identifier, so the MFT must allow arbitrary identifiers, as long as they are unique. If the MFT creates an output stream, the MFT assigns the identifier.
By convention, if an MFT has exactly one fixed input stream and one fixed output stream, it should assign the identifier 0 to both streams.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetStreamIDs. See Creating Hybrid DMO/MFT Objects.
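The caller-allocated-array contract above can be sketched as a standalone model. This is hypothetical, not the real IMFTransform::GetStreamIDs: if either array is too small the call fails, and entries beyond the actual stream count are never written.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical model of the GetStreamIDs contract.
enum IdHr { IDS_OK, IDS_BUFFER_TOO_SMALL };

IdHr GetStreamIDsModel(const std::vector<unsigned>& inIds,
                       const std::vector<unsigned>& outIds,
                       std::vector<unsigned>& inBuf,
                       std::vector<unsigned>& outBuf) {
    if (inBuf.size() < inIds.size() || outBuf.size() < outIds.size())
        return IDS_BUFFER_TOO_SMALL;  // one or both arrays are too small
    for (std::size_t i = 0; i < inIds.size(); ++i)  inBuf[i]  = inIds[i];
    for (std::size_t i = 0; i < outIds.size(); ++i) outBuf[i] = outIds[i];
    return IDS_OK;                    // extra entries are left untouched
}
```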
-Gets the buffer requirements and other information for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
It is valid to call this method before setting the media types.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputStreamInfo. See Creating Hybrid DMO/MFT Objects.
-Gets the buffer requirements and other information for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream number. |
It is valid to call this method before setting the media types.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputStreamInfo. See Creating Hybrid DMO/MFT Objects.
-Gets the global attribute store for this Media Foundation transform (MFT).
- Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not support attributes. |
Use the
Implementation of this method is optional unless the MFT needs to support a particular set of attributes. Exception: Hardware-based MFTs must implement this method. See Hardware MFTs.
-Gets the attribute store for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not support input stream attributes. |
| Invalid stream identifier. |
Implementation of this method is optional unless the MFT needs to support a particular set of attributes.
To get the attribute store for the entire MFT, call
Gets the attribute store for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not support output stream attributes. |
| Invalid stream identifier. |
Implementation of this method is optional unless the MFT needs to support a particular set of attributes.
To get the attribute store for the entire MFT, call
Removes an input stream from this Media Foundation transform (MFT).
-Identifier of the input stream to remove.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The transform has a fixed number of input streams. |
| The stream is not removable, or the transform currently has the minimum number of input streams it can support. |
| Invalid stream identifier. |
| The transform has unprocessed input buffers for the specified stream. |
?
If the transform has a fixed number of input streams, the method returns E_NOTIMPL.
An MFT might support this method but not allow certain input streams to be removed. If an input stream can be removed, the
If the transform still has unprocessed input for that stream, the method might succeed or it might return
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTDeleteInputStream. See Creating Hybrid DMO/MFT Objects.
-Adds one or more new input streams to this Media Foundation transform (MFT).
-Number of streams to add.
Array of stream identifiers. The new stream identifiers must not match any existing input streams.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The MFT has a fixed number of input streams. |
?
If the new streams exceed the maximum number of input streams for this transform, the method returns E_INVALIDARG. To find the maximum number of input streams, call
If any of the new stream identifiers conflicts with an existing input stream, the method returns E_INVALIDARG.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTAddInputStreams. See Creating Hybrid DMO/MFT Objects.
-Gets an available media type for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Index of the media type to retrieve. Media types are indexed from zero and returned in approximate order of preference.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not have a list of available input types. |
| Invalid stream identifier. |
| The dwTypeIndex parameter is out of range. |
| You must set the output types before setting the input types. |
?
The MFT defines a list of available media types for each input stream and orders them by preference. This method enumerates the available media types for an input stream. To enumerate the available types, increment dwTypeIndex until the method returns MF_E_NO_MORE_TYPES.
Setting the media type on one stream might change the available types for another stream, or change the preference order. However, an MFT is not required to update the list of available types dynamically. The only guaranteed way to test whether you can set a particular input type is to call
In some cases, an MFT cannot return a list of input types until one or more output types are set. If so, the method returns
An MFT is not required to implement this method. However, most MFTs should implement this method, unless the supported types are simple and can be discovered through the
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputAvailableType. See Creating Hybrid DMO/MFT Objects.
For encoders, after the output type is set, GetInputAvailableType must return a list of input types that are compatible with the current output type. This means that all types returned by GetInputAvailableType after the output type is set must be valid types for SetInputType.
Encoders should reject input types if the attributes of the input media type and output media type do not match, such as resolution setting with
Gets an available media type for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Index of the media type to retrieve. Media types are indexed from zero and returned in approximate order of preference.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not have a list of available output types. |
| Invalid stream identifier. |
| The dwTypeIndex parameter is out of range. |
| You must set the input types before setting the output types. |
?
The MFT defines a list of available media types for each output stream and orders them by preference. This method enumerates the available media types for an output stream. To enumerate the available types, increment dwTypeIndex until the method returns MF_E_NO_MORE_TYPES.
Setting the media type on one stream can change the available types for another stream (or change the preference order). However, an MFT is not required to update the list of available types dynamically. The only guaranteed way to test whether you can set a particular input type is to call
In some cases, an MFT cannot return a list of output types until one or more input types are set. If so, the method returns
An MFT is not required to implement this method. However, most MFTs should implement this method, unless the supported types are simple and can be discovered through the
This method can return a partial media type. A partial media type contains an incomplete description of a format, and is used to provide a hint to the caller. For example, a partial type might include just the major type and subtype GUIDs. However, after the client sets the input types on the MFT, the MFT should generally return at least one complete output type, which can be used without further modification. For more information, see Complete and Partial Media Types.
Some MFTs cannot provide an accurate list of output types until the MFT receives the first input sample. For example, the MFT might need to read the first packet header to deduce the format. An MFT should handle this situation as follows:
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputAvailableType. See Creating Hybrid DMO/MFT Objects.
-Sets, tests, or clears the media type for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Pointer to the
Zero or more flags from the _MFT_SET_TYPE_FLAGS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT cannot use the proposed media type. |
| Invalid stream identifier. |
| The proposed type is not valid. This error code indicates that the media type itself is not configured correctly; for example, it might contain mutually contradictory attributes. |
| The MFT cannot switch types while processing data. Try draining or flushing the MFT. |
| You must set the output types before setting the input types. |
| The MFT could not find a suitable DirectX Video Acceleration (DXVA) configuration. |
?
This method can be used to set, test without setting, or clear the media type:
Setting the media type on one stream may change the acceptable types on another stream.
An MFT may require the caller to set one or more output types before setting the input type. If so, the method returns
If the MFT supports DirectX Video Acceleration (DXVA) but is unable to find a suitable DXVA configuration (for example, if the graphics driver does not have the right capabilities), the method should return
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTSetInputType. See Creating Hybrid DMO/MFT Objects.
-Sets, tests, or clears the media type for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Pointer to the
Zero or more flags from the _MFT_SET_TYPE_FLAGS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The transform cannot use the proposed media type. |
| Invalid stream identifier. |
| The proposed type is not valid. This error code indicates that the media type itself is not configured correctly; for example, it might contain mutually contradictory flags. |
| The MFT cannot switch types while processing data. Try draining or flushing the MFT. |
| You must set the input types before setting the output types. |
| The MFT could not find a suitable DirectX Video Acceleration (DXVA) configuration. |
?
This method can be used to set, test without setting, or clear the media type:
Setting the media type on one stream may change the acceptable types on another stream.
An MFT may require the caller to set one or more input types before setting the output type. If so, the method returns
If the MFT supports DirectX Video Acceleration (DXVA) but is unable to find a suitable DXVA configuration (for example, if the graphics driver does not have the right capabilities), the method should return
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTSetOutputType. See Creating Hybrid DMO/MFT Objects.
-Gets the current media type for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
| The input media type has not been set. |
?
If the specified input stream does not yet have a media type, the method returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputCurrentType. See Creating Hybrid DMO/MFT Objects.
-Gets the current media type for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
| The output media type has not been set. |
?
If the specified output stream does not yet have a media type, the method returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputCurrentType. See Creating Hybrid DMO/MFT Objects.
-Queries whether an input stream on this Media Foundation transform (MFT) can accept more data.
- Input stream identifier. To get the list of stream identifiers, call
Receives a member of the _MFT_INPUT_STATUS_FLAGS enumeration, or zero. If the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
| The media type is not set on one or more streams. |
?
If the method returns the
Use this method to test whether the input stream is ready to accept more data, without incurring the overhead of allocating a new sample and calling ProcessInput.
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output (or both).
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputStatus. See Creating Hybrid DMO/MFT Objects.
-Queries whether the Media Foundation transform (MFT) is ready to produce output data.
- Receives a member of the _MFT_OUTPUT_STATUS_FLAGS enumeration, or zero. If the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
| The media type is not set on one or more streams. |
?
If the method returns the
MFTs are not required to implement this method. If the method returns E_NOTIMPL, you must call ProcessOutput to determine whether the transform has output data.
If the MFT has more than one output stream, but it does not produce samples at the same time for each stream, it can set the
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputStatus. See Creating Hybrid DMO/MFT Objects.
-Sets the range of time stamps the client needs for output.
-Specifies the earliest time stamp. The Media Foundation transform (MFT) will accept input until it can produce an output sample that begins at this time; or until it can produce a sample that ends at this time or later. If there is no lower bound, use the value MFT_OUTPUT_BOUND_LOWER_UNBOUNDED.
Specifies the latest time stamp. The MFT will not produce an output sample with time stamps later than this time. If there is no upper bound, use the value MFT_OUTPUT_BOUND_UPPER_UNBOUNDED.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
| The media type is not set on one or more streams. |
?
This method can be used to optimize preroll, especially in formats that have gaps between time stamps, or formats where the data must start on a sync point, such as MPEG-2. Calling this method is optional, and implementation of this method by an MFT is optional. If the MFT does not implement the method, the return value is E_NOTIMPL.
If an MFT implements this method, it must limit its output data to the range of times specified by hnsLowerBound and hnsUpperBound. The MFT discards any input data that is not needed to produce output within this range. If the sample boundaries do not exactly match the range, the MFT should split the output samples, if possible. Otherwise, the output samples can overlap the range.
For example, suppose the output range is 100 to 150 milliseconds (ms), and the output format is video with each frame lasting 33 ms. A sample with a time stamp of 67 ms overlaps the range (67 + 33 = 100) and is produced as output. A sample with a time stamp of 66 ms is discarded (66 + 33 = 99). Similarly, a sample with a time stamp of 150 ms is produced as output, but a sample with a time stamp of 151 ms is discarded.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTSetOutputBounds. See Creating Hybrid DMO/MFT Objects.
-Sends an event to an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
| Invalid stream number. |
| The media type is not set on one or more streams. |
| The pipeline should not propagate the event. |
?
An MFT can handle sending the event downstream, or it can let the pipeline do this, as indicated by the return value:
To send the event downstream, the MFT adds the event to the collection object that is provided by the client in the pEvents member of the
Events must be serialized with the samples that come before and after them. Attach the event to the output sample that follows the event. (The pipeline will process the event first, and then the sample.) If an MFT holds back one or more samples between calls to
If an MFT does not hold back samples and does not need to examine any events, it can return E_NOTIMPL.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTProcessEvent. See Creating Hybrid DMO/MFT Objects.
-Sends a message to the Media Foundation transform (MFT).
- The message to send, specified as a member of the
Message parameter. The meaning of this parameter depends on the message type.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream number. Applies to the |
| The media type is not set on one or more streams. |
?
Before calling this method, set the media types on all input and output streams.
The MFT might ignore certain message types. If so, the method returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTProcessMessage. See Creating Hybrid DMO/MFT Objects.
-Delivers data to an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Pointer to the
Reserved. Must be zero.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| Invalid stream identifier. |
| The input sample requires a valid sample duration. To set the duration, call Some MFTs require that input samples have valid durations. Some MFTs do not require sample durations. |
| The input sample requires a time stamp. To set the time stamp, call Some MFTs require that input samples have valid time stamps. Some MFTs do not require time stamps. |
| The transform cannot process more input at this time. |
| The media type is not set on one or more streams. |
| The media type is not supported for DirectX Video Acceleration (DXVA). A DXVA-enabled decoder might return this error code. |
?
Note: If you are converting a DirectX Media Object (DMO) to an MFT, be aware that S_FALSE is not a valid return code for this method. In most cases, if the method succeeds, the MFT stores the sample and holds a reference count on the
If the MFT already has enough input data to produce an output sample, it does not accept new input data, and ProcessInput returns
An exception to this rule is the
An MFT can process the input data in the ProcessInput method. However, most MFTs wait until the client calls ProcessOutput.
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output. It should never be in both states or neither state. An MFT should only accept as much input as it needs to generate at least one output sample, at which point ProcessInput returns
If an MFT encounters a non-fatal error in the input data, it can simply drop the data and attempt to recover when it gets more input data. To request more input data, the MFT returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTProcessInput. See Creating Hybrid DMO/MFT Objects.
-Generates output from the current input data.
-Bitwise OR of zero or more flags from the _MFT_PROCESS_OUTPUT_FLAGS enumeration.
Number of elements in the pOutputSamples array. The value must be at least 1.
Pointer to an array of
Receives a bitwise OR of zero or more flags from the _MFT_PROCESS_OUTPUT_STATUS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ProcessOutput method was called on an asynchronous MFT that was not expecting this method call. |
| Invalid stream identifier in the dwStreamID member of one or more |
| The transform cannot produce output data until it receives more input data. |
| The format has changed on an output stream, or there is a new preferred format, or there is a new output stream. |
| You must set the media type on one or more streams of the MFT. |
?
Note: If you are converting a DirectX Media Object (DMO) to an MFT, be aware that S_FALSE is not a valid return code for this method. The size of the pOutputSamples array must be equal to or greater than the number of selected output streams. The number of selected output streams equals the total number of output streams minus the number of deselected streams. A stream is deselected if it has the
This method generates output samples and can also generate events. If the method succeeds, at least one of the following conditions is true:
If MFT_UNIQUE_METHOD_NAMES is defined before including Mftransform.h, this method is renamed MFTProcessOutput. See Creating Hybrid DMO/MFT Objects.
-Implemented by components that provide input trust authorities (ITAs). This interface is used to get the ITA for each of the component's streams.
-
Retrieves the input trust authority (ITA) for a specified stream.
-The stream identifier for which the ITA is being requested.
The interface identifier (IID) of the interface being requested. Currently the only supported value is IID_IMFInputTrustAuthority.
Receives a reference to the ITA's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ITA does not expose the requested interface. |
?
Implemented by components that provide output trust authorities (OTAs). Any Media Foundation transform (MFT) or media sink that is designed to work within the protected media path (PMP) and also sends protected content outside the Media Foundation pipeline must implement this interface.
The policy engine uses this interface to negotiate what type of content protection should be applied to the content. Applications do not use this interface directly.
-If an MFT supports
Gets the number of output trust authorities (OTAs) provided by this trusted output. Each OTA reports a single action.
-
Queries whether this output is a policy sink, meaning it handles the rights and restrictions required by the input trust authority (ITA).
-A trusted output is generally considered to be a policy sink if it does not pass the media content that it receives anywhere else; or, if it does pass the media content elsewhere, either it protects the content using some proprietary method such as encryption, or it sufficiently devalues the content so as not to require protection.
-Gets the number of output trust authorities (OTAs) provided by this trusted output. Each OTA reports a single action.
-Receives the number of OTAs.
If this method succeeds, it returns
Gets an output trust authority (OTA), specified by index.
- Zero-based index of the OTA to retrieve. To get the number of OTAs provided by this object, call
Receives a reference to the
If this method succeeds, it returns
Queries whether this output is a policy sink, meaning it handles the rights and restrictions required by the input trust authority (ITA).
-Receives a Boolean value. If TRUE, this object is a policy sink. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
A trusted output is generally considered to be a policy sink if it does not pass the media content that it receives anywhere else; or, if it does pass the media content elsewhere, either it protects the content using some proprietary method such as encryption, or it sufficiently devalues the content so as not to require protection.
-Limits the effective video resolution.
-This method limits the effective resolution of the video image. The actual resolution on the target device might be higher, due to stretching the image.
The EVR might call this method at any time if the
Queries whether the plug-in has any transient vulnerabilities at this time.
-Receives a Boolean value. If TRUE, the plug-in has no transient vulnerabilities at the moment and can receive protected content. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method provides a way for the plug-in to report temporary conditions that would cause the input trust authority (ITA) to distrust the plug-in. For example, if an EVR presenter is in windowed mode, it is vulnerable to GDI screen captures.
To disable screen capture in Direct3D, the plug-in must do the following:
Create the Direct3D device in full-screen exclusive mode.
Specify the D3DCREATE_DISABLE_PRINTSCREEN flag when you create the device. For more information, see IDirect3D9::CreateDevice in the DirectX documentation.
In addition, the graphics adapter must support the Windows Vista Display Driver Model (WDDM) and the Direct3D extensions for Windows Vista (sometimes called D3D9Ex or D3D9L).
If these conditions are met, the presenter can return TRUE in the pYes parameter. Otherwise, it should return
The EVR calls this method whenever the device changes. If the plug-in returns
This method should be used only to report transient conditions. A plug-in that is never in a trusted state should not implement the
Queries whether the plug-in can limit the effective video resolution.
-Receives a Boolean value. If TRUE, the plug-in can limit the effective video resolution. Otherwise, the plug-in cannot limit the video resolution. If the method fails, the EVR treats the value as
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Constriction is a protection mechanism that limits the effective resolution of the video frame to a specified maximum number of pixels.
Video constriction can be implemented by either the mixer or the presenter.
If the method returns TRUE, the EVR might call
Limits the effective video resolution.
-Maximum number of source pixels that may appear in the final video image, in thousands of pixels. If the value is zero, the video is disabled. If the value is MAXDWORD (0xFFFFFFFF), video constriction is removed and the video may be rendered at full resolution.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method limits the effective resolution of the video image. The actual resolution on the target device might be higher, due to stretching the image.
The EVR might call this method at any time if the
Enables or disables the ability of the plug-in to export the video image.
-Boolean value. Specify TRUE to disable image exporting, or
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
An EVR plug-in might expose a way for the application to get a copy of the video frames. For example, the standard EVR presenter implements
If the plug-in supports image exporting, this method enables or disables it. Before this method has been called for the first time, the EVR assumes that the mechanism is enabled.
If the plug-in does not support image exporting, this method should return
While image exporting is disabled, any associated export method, such as GetCurrentImage, should return
Returns the device identifier supported by a video renderer component. This interface is implemented by mixers and presenters for the enhanced video renderer (EVR). If you replace either of these components, the mixer and presenter must report the same device identifier.
-
Returns the identifier of the video device supported by an EVR mixer or presenter.
-If a mixer or presenter uses Direct3D 9, it must return the value IID_IDirect3DDevice9 in pDeviceID. The EVR's default mixer and presenter both return this value. If you write a custom mixer or presenter, it can return some other value. However, the mixer and presenter must use matching device identifiers.
-
Returns the identifier of the video device supported by an EVR mixer or presenter.
-Receives the device identifier. Generally, the value is IID_IDirect3DDevice9.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
If a mixer or presenter uses Direct3D 9, it must return the value IID_IDirect3DDevice9 in pDeviceID. The EVR's default mixer and presenter both return this value. If you write a custom mixer or presenter, it can return some other value. However, the mixer and presenter must use matching device identifiers.
-Controls how the Enhanced Video Renderer (EVR) displays video.
The EVR presenter implements this interface. To get a reference to the interface, call
If you implement a custom presenter for the EVR, the presenter can optionally expose this interface as a service.
-Queries how the enhanced video renderer (EVR) handles the aspect ratio of the source video.
-Gets or sets the clipping window for the video.
-There is no default clipping window. The application must set the clipping window.
-Gets or sets the border color for the video.
-The border color is used for areas where the enhanced video renderer (EVR) does not draw any video.
The border color is not used for letterboxing. To get the letterbox color, call IMFVideoProcessor::GetBackgroundColor.
-Gets or sets various video rendering settings.
-Queries whether the enhanced video renderer (EVR) is currently in full-screen mode.
-Gets the size and aspect ratio of the video, prior to any stretching by the video renderer.
-Receives the size of the native video rectangle. This parameter can be
Receives the aspect ratio of the video. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one of the parameters must be non- |
| The video renderer has been shut down. |
?
If no media types have been set on any video streams, the method succeeds but all parameters are set to zero.
You can set pszVideo or pszARVideo to
Gets the range of sizes that the enhanced video renderer (EVR) can display without significantly degrading performance or image quality.
-Receives the minimum ideal size. This parameter can be
Receives the maximum ideal size. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one parameter must be non- |
| The video renderer has been shut down. |
?
You can set pszMin or pszMax to
Sets the source and destination rectangles for the video.
-Pointer to an
Specifies the destination rectangle. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one parameter must be non- |
| The video renderer has been shut down. |
?
The source rectangle defines which portion of the video is displayed. It is specified in normalized coordinates. For more information, see
The destination rectangle defines a rectangle within the clipping window where the video appears. It is specified in pixels, relative to the client area of the window. To fill the entire window, set the destination rectangle to {0, 0, width, height}, where width and height are the dimensions of the window client area. The default destination rectangle is {0, 0, 0, 0}.
To update just one of these rectangles, set the other parameter to
Before setting the destination rectangle (prcDest), you must set the video window by calling
Gets the source and destination rectangles for the video.
-Pointer to an
Receives the current destination rectangle.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| One or more required parameters are |
| The video renderer has been shut down. |
?
Specifies how the enhanced video renderer (EVR) handles the aspect ratio of the source video.
-Bitwise OR of one or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid flags. |
| The video renderer has been shut down. |
?
Queries how the enhanced video renderer (EVR) handles the aspect ratio of the source video.
-Receives a bitwise OR of one or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
Sets the source and destination rectangles for the video.
-Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one parameter must be non- |
| The video renderer has been shut down. |
?
The source rectangle defines which portion of the video is displayed. It is specified in normalized coordinates. For more information, see
The destination rectangle defines a rectangle within the clipping window where the video appears. It is specified in pixels, relative to the client area of the window. To fill the entire window, set the destination rectangle to {0, 0, width, height}, where width and height are the dimensions of the window client area. The default destination rectangle is {0, 0, 0, 0}.
To update just one of these rectangles, set the other parameter to
Before setting the destination rectangle (prcDest), you must set the video window by calling
Gets the clipping window for the video.
-Receives a handle to the window where the enhanced video renderer (EVR) will draw the video.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
There is no default clipping window. The application must set the clipping window.
-
Repaints the current video frame. Call this method whenever the application receives a WM_PAINT message.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The EVR cannot repaint the frame at this time. This error can occur while the EVR is switching between full-screen and windowed mode. The caller can safely ignore this error. |
| The video renderer has been shut down. |
Gets a copy of the current image being displayed by the video renderer.
-Pointer to a BITMAPINFOHEADER structure that receives a description of the bitmap. Set the biSize member of the structure to sizeof(BITMAPINFOHEADER) before calling the method.
Receives a reference to a buffer that contains a packed Windows device-independent bitmap (DIB). The caller must free the memory for the bitmap by calling CoTaskMemFree.
Receives the size of the buffer returned in pDib, in bytes.
Receives the time stamp of the captured image.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The content is protected and the license does not permit capturing the image. |
| The video renderer has been shut down. |
This method can be called at any time. However, calling the method too frequently degrades the video playback performance.
This method retrieves a copy of the final composited image, which includes any substreams, alpha-blended bitmap, aspect ratio correction, background color, and so forth.
In windowed mode, the bitmap is the size of the destination rectangle specified in
Sets the border color for the video.
-Specifies the border color as a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
By default, if the video window straddles two monitors, the enhanced video renderer (EVR) clips the video to one monitor and draws the border color on the remaining portion of the window. (To change the clipping behavior, call
The border color is not used for letterboxing. To change the letterbox color, call IMFVideoProcessor::SetBackgroundColor.
-Gets the border color for the video.
-Receives the border color, as a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
The border color is used for areas where the enhanced video renderer (EVR) does not draw any video.
The border color is not used for letterboxing. To get the letterbox color, call IMFVideoProcessor::GetBackgroundColor.
-
Sets various preferences related to video rendering.
-Bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid flags. |
| The video renderer has been shut down. |
Gets various video rendering settings.
-Receives a bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
[This API is not supported and may be altered or unavailable in the future.]
Sets or unsets full-screen rendering mode.
To implement full-screen playback, an application should simply resize the video window to cover the entire area of the monitor. Also set the window to be a topmost window, so that the application receives all mouse-click messages. For more information about topmost windows, see the documentation for the SetWindowPos function.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
The default EVR presenter implements full-screen mode using Direct3D exclusive mode.
If you use this method to switch to full-screen mode, set the application window to be a topmost window and resize the window to cover the entire monitor. This ensures that the application window receives all mouse-click messages. Also set the keyboard focus to the application window. When you switch out of full-screen mode, restore the window's original size and position.
By default, the cursor is still visible in full-screen mode. To hide the cursor, call ShowCursor.
The transition to and from full-screen mode occurs asynchronously. To get the current mode, call
Queries whether the enhanced video renderer (EVR) is currently in full-screen mode.
-Receives a Boolean value. If TRUE, the EVR is in full-screen mode. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The EVR is currently switching between full-screen and windowed mode. |
Represents a description of a video format.
-If the major type of a media type is
Applications should avoid using this interface except when a method or function requires an
Represents a description of a video format.
-If the major type of a media type is
Applications should avoid using this interface except when a method or function requires an
Represents a description of a video format.
-If the major type of a media type is
Applications should avoid using this interface except when a method or function requires an
[This API is not supported and may be altered or unavailable in the future. Instead, applications should set the
Retrieves an alternative representation of the media type.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is equivalent to
Instead of calling this method, applications should set the
Controls how the Enhanced Video Renderer (EVR) mixes video substreams. Applications can use this interface to control video mixing during playback.
The EVR mixer implements this interface. To get a reference to the interface, call
If you implement a custom mixer for the EVR, the mixer can optionally expose this interface as a service.
-
Sets the z-order of a video stream.
-Identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Z-order value. The z-order of the reference stream must be zero. The maximum z-order value is the number of streams minus one.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The value of dwZ is larger than the maximum z-order value. |
| Invalid z-order for this stream. For the reference stream, dwZ must be zero. For all other streams, dwZ must be greater than zero. |
| Invalid stream identifier. |
The EVR draws the video streams in the order of their z-order values, starting with zero. The reference stream must be first in the z-order, and the remaining streams can be in any order.
-
Retrieves the z-order of a video stream.
-Identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Receives the z-order value.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
Sets the position of a video stream within the composition rectangle.
-Identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The coordinates of the bounding rectangle given in pnrcOutput are not valid. |
| Invalid stream identifier. |
The mixer draws each video stream inside a bounding rectangle that is specified relative to the final video image. This bounding rectangle is given in normalized coordinates. For more information, see
The coordinates of the bounding rectangle must fall within the range [0.0, 1.0]. Also, the X and Y coordinates of the upper-left corner cannot exceed the X and Y coordinates of the lower-right corner. In other words, the bounding rectangle must fit entirely within the composition rectangle and cannot be flipped vertically or horizontally.
The following diagram shows how the EVR mixes substreams.
The output rectangle for the stream is specified by calling SetStreamOutputRect. The source rectangle is specified by calling
Retrieves the position of a video stream within the composition rectangle.
-The identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
Controls preferences for video deinterlacing.
The default video mixer for the Enhanced Video Renderer (EVR) implements this interface.
To get a reference to the interface, call
Gets or sets the current preferences for video deinterlacing.
-Sets the preferences for video deinterlacing.
-Bitwise OR of zero or more flags from the
If this method succeeds, it returns
Gets the current preferences for video deinterlacing.
-Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Maps a position on an input video stream to the corresponding position on an output video stream.
To obtain a reference to this interface, call
Maps output image coordinates to input image coordinates. This method provides the reverse transformation for components that map coordinates on the input image to different coordinates on the output image.
-X-coordinate of the output image, normalized to the range [0...1].
Y-coordinate of the output image, normalized to the range [0...1].
Output stream index for the coordinate mapping.
Input stream index for the coordinate mapping.
Receives the mapped x-coordinate of the input image, normalized to the range [0...1].
Receives the mapped y-coordinate of the input image, normalized to the range [0...1].
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
In the following diagram, R(dest) is the destination rectangle for the video. You can obtain this rectangle by calling
The position of P relative to R(dest) in normalized coordinates is calculated as follows:
float xn = float(x + 0.5) / widthDest;
float yn = float(y + 0.5) / heightDest;
where widthDest and heightDest are the width and height of R(dest) in pixels.
To calculate the position of P relative to R1, call MapOutputCoordinateToInputStream as follows:
float x1 = 0, y1 = 0;
hr = pMap->MapOutputCoordinateToInputStream(xn, yn, 0, dwInputStreamIndex, &x1, &y1);
The values returned in x1 and y1 are normalized to the range [0...1]. To convert back to pixel coordinates, scale these values by the size of R1:
int scaledx = int(floor(x1 * widthR1));
int scaledy = int(floor(y1 * heightR1));
Note that x1 and y1 might fall outside the range [0...1] if P lies outside of R1.
-Represents a video presenter. A video presenter is an object that receives video frames, typically from a video mixer, and presents them in some way, typically by rendering them to the display. The enhanced video renderer (EVR) provides a default video presenter, and applications can implement custom presenters.
The video presenter receives video frames as soon as they are available from upstream. The video presenter is responsible for presenting frames at the correct time and for synchronizing with the presentation clock.
-Configures the Video Processor MFT.
-This interface controls how the Video Processor MFT generates output frames.
-Sets the border color.
-Sets the source rectangle. The source rectangle is the portion of the input frame that is blitted to the destination surface.
-See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Sets the destination rectangle. The destination rectangle is the portion of the output surface where the source rectangle is blitted.
-See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Specifies whether to flip the video image.
-Specifies whether to rotate the video to the correct orientation.
-The original orientation of the video is specified by the
If eRotation is
Specifies the amount of downsampling to perform on the output.
-Sets the border color.
-A reference to an
If this method succeeds, it returns
Sets the source rectangle. The source rectangle is the portion of the input frame that is blitted to the destination surface.
-A reference to a
If this method succeeds, it returns
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Sets the destination rectangle. The destination rectangle is the portion of the output surface where the source rectangle is blitted.
-A reference to a
If this method succeeds, it returns
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Specifies whether to flip the video image.
-An
If this method succeeds, it returns
Specifies whether to rotate the video to the correct orientation.
-A
If this method succeeds, it returns
The original orientation of the video is specified by the
If eRotation is
Specifies the amount of downsampling to perform on the output.
-The sampling size. To disable constriction, set this parameter to
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Configures the Video Processor MFT.
-This interface controls how the Video Processor MFT generates output frames.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Overrides the rotation operation that is performed in the video processor.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Returns the list of supported effects in the currently configured video processor.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Overrides the rotation operation that is performed in the video processor.
-Rotation value in degrees. Typically, you can only use values from the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables effects that were implemented with IDirectXVideoProcessor::VideoProcessorBlt.
-If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Returns the list of supported effects in the currently configured video processor.
-A combination of
If this method succeeds, it returns
Sets a new mixer or presenter for the Enhanced Video Renderer (EVR).
Both the EVR media sink and the DirectShow EVR filter implement this interface. To get a reference to the interface, call QueryInterface on the media sink or the filter. Do not use
The EVR activation object returned by the
Sets a new mixer or presenter for the enhanced video renderer (EVR).
-Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Either the mixer or the presenter is invalid. |
| The mixer and presenter cannot be replaced in the current state. (EVR media sink.) |
| The video renderer has been shut down. |
| One or more input pins are connected. (DirectShow EVR filter.) |
Call this method directly after creating the EVR, before you do any of the following:
Call
Call
Connect any pins on the EVR filter, or set any media types on EVR media sink.
The EVR filter returns VFW_E_WRONG_STATE if any of the filter's pins are connected. The EVR media sink returns
The device identifiers for the mixer and the presenter must match. The
If the video renderer is in the protected media path (PMP), the mixer and presenter objects must be certified safe components and pass any trust authority verification that is being enforced. Otherwise, this method will fail.
-Allocates video samples for a video media sink.
The stream sinks on the enhanced video renderer (EVR) expose this interface as a service. To obtain a reference to the interface, call
Specifies the Direct3D device manager for the video media sink to use.
-The media sink uses the Direct3D device manager to obtain a reference to the Direct3D device, which it uses to allocate Direct3D surfaces. The device manager enables multiple objects in the pipeline (such as a video renderer and a video decoder) to share the same Direct3D device.
-
Specifies the Direct3D device manager for the video media sink to use.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The media sink uses the Direct3D device manager to obtain a reference to the Direct3D device, which it uses to allocate Direct3D surfaces. The device manager enables multiple objects in the pipeline (such as a video renderer and a video decoder) to share the same Direct3D device.
-
Releases all of the video samples that have been allocated.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Specifies the number of samples to allocate and the media type for the samples.
-Number of samples to allocate.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid media type. |
Gets a video sample from the allocator.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The allocator was not initialized. Call |
| No samples are available. |
Enables an application to track video samples allocated by the enhanced video renderer (EVR).
The stream sinks on the EVR expose this interface as a service. To get a reference to the interface, call the
Sets the callback object that receives notification whenever a video sample is returned to the allocator.
-To get a video sample from the allocator, call the
The allocator holds at most one callback reference. Calling this method again replaces the previous callback reference.
-Sets the callback object that receives notification whenever a video sample is returned to the allocator.
-A reference to the
If this method succeeds, it returns
To get a video sample from the allocator, call the
The allocator holds at most one callback reference. Calling this method again replaces the previous callback reference.
-Gets the number of video samples that are currently available for use.
-Receives the number of available samples.
If this method succeeds, it returns
To get a video sample from the allocator, call the
Allocates video samples that contain Microsoft Direct3D 11 texture surfaces.
-You can use this interface to allocate Direct3D 11 video samples, rather than allocating the texture surfaces and media samples directly. To get a reference to this interface, call the
To allocate video samples, perform the following steps:
Initializes the video sample allocator object.
-The initial number of samples to allocate.
The maximum number of samples to allocate.
A reference to the
A reference to the
If this method succeeds, it returns
The callback for the
Called when a video sample is returned to the allocator.
-If this method succeeds, it returns
To get a video sample from the allocator, call the
The callback for the
Called when allocator samples are released for pruning by the allocator, or when the allocator is removed.
-The sample to be pruned.
If this method succeeds, it returns
Completes an asynchronous request to register the topology work queues with the Multimedia Class Scheduler Service (MMCSS).
-Call this method when the
Registers the topology work queues with the Multimedia Class Scheduler Service (MMCSS).
-A reference to the
A reference to the
If this method succeeds, it returns
Each source node in the topology defines one branch of the topology. The branch includes every topology node that receives data from that node. An application can assign each branch of a topology its own work queue and then associate those work queues with MMCSS tasks.
To use this method, perform the following steps.
The BeginRegisterTopologyWorkQueuesWithMMCSS method is asynchronous. When the operation completes, the callback object's
To unregister the topology work queues from MMCSS, call
Completes an asynchronous request to register the topology work queues with the Multimedia Class Scheduler Service (MMCSS).
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when the
Unregisters the topology work queues from the Multimedia Class Scheduler Service (MMCSS).
-Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is asynchronous. When the operation completes, the callback object's
Completes an asynchronous request to unregister the topology work queues from the Multimedia Class Scheduler Service (MMCSS).
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when the
Retrieves the Multimedia Class Scheduler Service (MMCSS) class for a specified branch of the current topology.
-Identifies the work queue assigned to this topology branch. The application defines this value by setting the
Pointer to a buffer that receives the name of the MMCSS class. This parameter can be
On input, specifies the size of the pwszClass buffer, in characters. On output, receives the required size of the buffer, in characters. The size includes the terminating null character.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There is no work queue with the specified identifier. |
| The pwszClass buffer is too small to receive the class name. |
Retrieves the Multimedia Class Scheduler Service (MMCSS) task identifier for a specified branch of the current topology.
-Identifies the work queue assigned to this topology branch. The application defines this value by setting the
Receives the task identifier.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Associates a platform work queue with a Multimedia Class Scheduler Service (MMCSS) task.
- The platform work queue to register with MMCSS. See Work Queue Identifiers. To register all of the standard work queues to the same MMCSS task, set this parameter to
The name of the MMCSS task to be performed.
The unique task identifier. To obtain a new task identifier, set this value to zero.
A reference to the
A reference to the
If this method succeeds, it returns
This method is asynchronous. When the operation completes, the callback object's
To unregister the work queue from the MMCSS class, call
Completes an asynchronous request to associate a platform work queue with a Multimedia Class Scheduler Service (MMCSS) task.
-Pointer to the
The unique task identifier.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this function when the
To unregister the work queue from the MMCSS class, call
Unregisters a platform work queue from a Multimedia Class Scheduler Service (MMCSS) task.
-Platform work queue to register with MMCSS. See
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is asynchronous. When the operation completes, the callback object's
Completes an asynchronous request to unregister a platform work queue from a Multimedia Class Scheduler Service (MMCSS) task.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when the
Retrieves the Multimedia Class Scheduler Service (MMCSS) class for a specified platform work queue.
-Platform work queue to query. See
Pointer to a buffer that receives the name of the MMCSS class. This parameter can be
On input, specifies the size of the pwszClass buffer, in characters. On output, receives the required size of the buffer, in characters. The size includes the terminating null character.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The pwszClass buffer is too small to receive the class name. |
Retrieves the Multimedia Class Scheduler Service (MMCSS) task identifier for a specified platform work queue.
-Platform work queue to query. See
Receives the task identifier.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Extends the
-This interface allows applications to control both platform and topology work queues.
The
Retrieves the Multimedia Class Scheduler Service (MMCSS) string associated with the given topology work queue.
-The id of the topology work queue.
Pointer to the buffer that the work queue's MMCSS class string will be copied to.
If this method succeeds, it returns
Registers a platform work queue with Multimedia Class Scheduler Service (MMCSS) using the specified class and task id.
-The id of one of the standard platform work queues.
The MMCSS class which the work queue should be registered with.
The task id which the work queue should be registered with. If dwTaskId is 0, a new MMCSS bucket will be created.
The priority.
Standard callback used for async operations in Media Foundation.
Standard state used for async operations in Media Foundation.
If this method succeeds, it returns
Gets the Multimedia Class Scheduler Service (MMCSS) priority associated with the specified platform work queue.
-Topology work queue id for which the info will be returned.
Pointer to a buffer allocated by the caller that the work queue's MMCSS task id will be copied to.
Contains an image that is stored as metadata for a media source. This structure is used as the data item for the WM/Picture metadata attribute.
-The WM/Picture attribute is defined in the Windows Media Format SDK. The attribute contains a picture related to the content, such as album art.
To get this attribute from a media source, call
Image data.
This format differs from the WM_PICTURE structure used in the Windows Media Format SDK. The WM_PICTURE structure contains internal references to two strings and the image data. If the structure is copied, these references become invalid. The
Contains synchronized lyrics stored as metadata for a media source. This structure is used as the data item for the WM/Lyrics_Synchronised metadata attribute.
-The WM/Lyrics_Synchronised attribute is defined in the Windows Media Format SDK. The attribute contains lyrics synchronized to times in the source file.
To get this attribute from a media source, call
Null-terminated wide-character string that contains a description.
Lyric data. The format of the lyric data is described in the Windows Media Format SDK documentation.
This format differs from the WM_SYNCHRONISED_LYRICS structure used in the Windows Media Format SDK. The WM_SYNCHRONISED_LYRICS structure contains internal references to two strings and the lyric data. If the structure is copied, these references become invalid. The
Specifies the format of time stamps in the lyrics. This member is equivalent to the bTimeStampFormat member in the WM_SYNCHRONISED_LYRICS structure. The WM_SYNCHRONISED_LYRICS structure is documented in the Windows Media Format SDK.
Specifies the type of synchronized strings that are in the lyric data. This member is equivalent to the bContentType member in the WM_SYNCHRONISED_LYRICS structure.
Size, in bytes, of the lyric data.
Describes the indexing configuration for a stream and type of index.
-
Number of bytes used for each index entry. If the value is MFASFINDEXER_PER_ENTRY_BYTES_DYNAMIC, the index entries have variable size.
Optional text description of the index.
Indexing interval. The units of this value depend on the index type. A value of MFASFINDEXER_NO_FIXED_INTERVAL indicates that there is no fixed indexing interval.
Specifies an index for the ASF indexer object.
-The index object of an ASF file can contain a number of distinct indexes. Each index is identified by the type of index and the stream number. No ASF index object can contain more than one index for a particular combination of stream number and index type.
-The type of index. Currently this value must be GUID_NULL, which specifies time-based indexing.
The stream number to which this structure applies.
Contains statistics about the progress of the ASF multiplexer.
-Use
Number of frames written by the ASF multiplexer.
Number of frames dropped by the ASF multiplexer.
Describes a 4:4:4:4 Y'Cb'Cr' sample.
-Cr (chroma difference) value.
Cb (chroma difference) value.
Y (luma) value.
Alpha value.
Specifies the buffering parameters for a network byte stream.
-Size of the file, in bytes. If the total size is unknown, set this member to -1.
Size of the playable media data in the file, excluding any trailing data that is not useful for playback. If this value is unknown, set this member to -1.
Pointer to an array of
The number of elements in the prgBuckets array.
Amount of data to buffer from the network, in 100-nanosecond units. This value is in addition to the buffer windows defined in the prgBuckets member.
Amount of additional data to buffer when seeking, in 100-nanosecond units. This value reflects the fact that downloading must start from the previous key frame before the seek point. If the value is unknown, set this member to zero.
The playback duration of the file, in 100-nanosecond units. If the duration is unknown, set this member to zero.
Playback rate.
Specifies a range of bytes.
-The offset, in bytes, of the start of the range.
The offset, in bytes, of the end of the range.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A transform describing the location of a camera relative to other cameras or an established external reference.
-The Position value should be expressed in real-world coordinates in units of meters. The coordinate system of both position and orientation should be right-handed Cartesian as shown in the following diagram.
Important: The position and orientation are expressed as transforms toward the reference frame or origin. For example, a Position value of {-5, 0, 0} means that the origin is 5 meters to the left of the sensor, and therefore the sensor is 5 meters to the right of the origin. A sensor that is positioned 2 meters above the origin should specify a Position of {0, -2, 0} because that is the translation from the sensor to the origin.
If the sensor is aligned with the origin, the rotation is the identity quaternion and the forward vector is along the -Z axis {0, 0, -1}. If the sensor is rotated +30 degrees around the Y axis from the origin, then the Orientation value should be a rotation of -30 degrees around the Y axis, because it represents the rotation from the sensor to the origin.
-A reference
The transform position.
The transform rotation.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Describes the location of a camera relative to other cameras or an established external reference.
-The number of transforms in the CalibratedTransforms array.
The array of transforms in the extrinsic data.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a polynomial lens distortion model.
The first radial distortion coefficient.
The second radial distortion coefficient.
The third radial distortion coefficient.
The first tangential distortion coefficient.
The second tangential distortion coefficient.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a pinhole camera model.
For square pixels, the X and Y fields of the FocalLength should be the same.
The PrincipalPoint field is expressed in pixels, not in normalized coordinates. The origin [0,0] is the bottom-left corner of the image.
The focal length of the camera.
The principal point of the camera.
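As a sketch of how these fields are typically used, the standard pinhole projection maps a camera-space point (x, y, z) with z > 0 to pixel coordinates via u = fx·x/z + cx and v = fy·y/z + cy. The struct and function names below are illustrative stand-ins, not the SDK definitions, and lens distortion is ignored.

```c
#include <assert.h>
#include <math.h>

/* Illustrative stand-ins for the pinhole model fields described above. */
typedef struct { float x, y; } Vec2;
typedef struct {
    Vec2 FocalLength;    /* in pixels */
    Vec2 PrincipalPoint; /* in pixels, origin at the bottom-left corner */
} PinholeModel;

/* Standard pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy. */
Vec2 project_point(const PinholeModel *m, float x, float y, float z)
{
    Vec2 p = {
        m->FocalLength.x * (x / z) + m->PrincipalPoint.x,
        m->FocalLength.y * (y / z) + m->PrincipalPoint.y
    };
    return p;
}
```

With fx = fy = 500, a principal point of (320, 240), and the point (0.1, 0.2, 1.0), this yields pixel coordinates near (370, 340).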
This structure contains blob information for the EV compensation feedback for the photo captured.
A KSCAMERA_EXTENDEDPROP_EVCOMP_XXX step flag.
The EV compensation value in units of the step specified.
The CapturedMetadataISOGains structure describes the blob format for MF_CAPTURE_METADATA_ISO_GAINS.
The CapturedMetadataISOGains structure only describes the blob format for the MF_CAPTURE_METADATA_ISO_GAINS attribute. The metadata item structure for ISO gains (KSCAMERA_METADATA_ITEMHEADER + ISO gains metadata payload) is up to the driver and must be 8-byte aligned.
This structure describes the blob format for the MF_CAPTURE_METADATA_WHITEBALANCE_GAINS attribute.
The MF_CAPTURE_METADATA_WHITEBALANCE_GAINS attribute contains the white balance gains applied to R, G, B by the sensor or ISP when the preview frame was captured. These gain values are unitless.
The CapturedMetadataWhiteBalanceGains structure only describes the blob format for the MF_CAPTURE_METADATA_WHITEBALANCE_GAINS attribute. The metadata item structure for white balance gains (KSCAMERA_METADATA_ITEMHEADER + white balance gains metadata payload) is up to the driver and must be 8-byte aligned.
The R value of the blob.
The G value of the blob.
The B value of the blob.
Defines the properties of a clock.
The interval at which the clock correlates its clock time with the system time, in 100-nanosecond units. If the value is zero, the correlation is made whenever the
The unique identifier of the underlying device that provides the time. If two clocks have the same unique identifier, they are based on the same device. If the underlying device is not shared between two clocks, the value can be GUID_NULL.
A bitwise OR of flags from the
The clock frequency in Hz. A value of MFCLOCK_FREQUENCY_HNS means that the clock has a frequency of 10 MHz (100-nanosecond ticks), which is the standard MFTIME time unit in Media Foundation. If the
The amount of inaccuracy that may be present on the clock, in parts per billion (ppb). For example, an inaccuracy of 50 ppb means the clock might drift up to 50 seconds per billion seconds of real time. If the tolerance is not known, the value is MFCLOCK_TOLERANCE_UNKNOWN. This constant is equal to 50 parts per million (ppm).
The amount of jitter that may be present, in 100-nanosecond units. Jitter is the variation in the frequency due to sampling the underlying clock. Jitter does not include inaccuracies caused by drift, which is reflected in the value of dwClockTolerance.
For clocks based on a single device, the minimum jitter is the length of the tick period (the inverse of the frequency). For example, if the frequency is 10 Hz, the jitter is 0.1 second, which is 1,000,000 in MFTIME units. This value reflects the fact that the clock might be sampled just before the next tick, resulting in a clock time that is one period less than the actual time. If the frequency is greater than 10 MHz, the jitter should be set to 1 (the minimum value).
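The minimum-jitter rule above can be expressed directly: one MFTIME unit is 100 ns, so one second is 10,000,000 units, the tick period is the inverse of the frequency, and the result is clamped to 1 for frequencies above 10 MHz. The function name is illustrative.

```c
#include <assert.h>

/* MFTIME counts 100-nanosecond units, so one second is 10,000,000 units. */
#define HNS_PER_SECOND 10000000ULL

/* Minimum jitter for a device-based clock: one tick period in 100-ns
   units, clamped to 1 when the frequency exceeds 10 MHz. */
unsigned long long min_jitter_hns(unsigned long long frequency_hz)
{
    unsigned long long period = HNS_PER_SECOND / frequency_hz;
    return period > 0 ? period : 1;
}
```

For a 10 Hz clock this gives 1,000,000 units (0.1 second), matching the example in the text.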
If a clock's underlying hardware device does not directly time stamp the incoming data, the jitter also includes the time required to dispatch the driver's interrupt service routine (ISR). In that case, the expected jitter should include the following values:
Value | Meaning |
---|---|
| Jitter due to time stamping during the device driver's ISR. |
| Jitter due to time stamping during the deferred procedure call (DPC) processing. |
| Jitter due to dropping to normal thread execution before time stamping. |
Contains information about the data that you want to provide as input to a protection system function.
The identifier of the function that you need to run. This value is defined by the implementation of the protection system.
The size of the private data that the implementation of the security processor reserved. You can determine this value by calling the
The size of the data provided as input to the protection system function that you want to run.
Reserved.
The data to provide as input to the protection system function.
If the value of the PrivateDataByteCount member is greater than 0, bytes 0 through PrivateDataByteCount - 1 are reserved for use by the independent hardware vendor (IHV). Bytes PrivateDataByteCount through HWProtectionDataByteCount + PrivateDataByteCount - 1 contain the input data for the protection system function.
The protection system specification defines the format and size of the DRM function.
Contains information about the data you received as output from a protection system function.
The size of the private data that the implementation of the security processor reserves, in bytes. You can determine this value by calling the
The maximum size of data that the independent hardware vendor (IHV) can return in the output buffer, in bytes.
The size of the data that the IHV wrote to the output buffer, in bytes.
The result of the protection system function.
The number of 100-nanosecond units spent transporting the data.
The number of 100-nanosecond units spent running the protection system function.
The output of the protection system function.
If the value of the PrivateDataByteCount member is greater than 0, bytes 0 through PrivateDataByteCount - 1 are reserved for IHV use. Bytes PrivateDataByteCount through MaxHWProtectionDataByteCount + PrivateDataByteCount - 1 contain the region of the array into which the driver should return the output data from the protection system function.
The protection system specification defines the format and size of the function.
Advises the secure processor of the Multimedia Class Scheduler service (MMCSS) parameters so that real-time tasks can be scheduled at the expected priority.
The identifier for the MMCSS task.
The name of the MMCSS task.
The base priority of the thread that runs the MMCSS task.
The
This structure is identical to the DirectShow
Major type
Subtype
If TRUE, samples are of a fixed size. This field is informational only. For audio, it is generally set to TRUE. For video, it is usually TRUE for uncompressed video and FALSE for compressed video.
If TRUE, samples are compressed using temporal (interframe) compression. (A value of TRUE indicates that not all frames are key frames.) This field is informational only.
Size of the sample in bytes. For compressed data, the value can be zero.
Format type | Format structure |
---|---|
| DVINFO |
| |
| |
| None. |
| |
| |
| |
Not used. Set to NULL.
Size of the format block of the media type.
Pointer to the format structure. The structure type is specified by the formattype member. The format structure must be present, unless formattype is GUID_NULL or FORMAT_None.
The FaceCharacterization structure describes the blob format for the MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute.
The MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute contains the blink and facial expression state for the face ROIs identified in MF_CAPTURE_METADATA_FACEROIS. For a device that does not support blink or facial expression detection, this attribute should be omitted.
The facial expressions that can be detected are defined as follows:
#define MF_METADATAFACIALEXPRESSION_SMILE 0x00000001
The FaceCharacterizationBlobHeader and FaceCharacterization structures only describe the blob format for the MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute. The metadata item structure for the face characterizations (KSCAMERA_METADATA_ITEMHEADER + face characterizations metadata payload) is up to the driver and must be 8-byte aligned.
0 indicates no blink for the left eye, 100 indicates definite blink for the left eye (0 - 100).
0 indicates no blink for the right eye, 100 indicates definite blink for the right eye (0 - 100).
A defined facial expression value.
0 indicates the facial expression was not identified, 100 indicates the facial expression was definitely identified (0 - 100).
The FaceCharacterizationBlobHeader structure describes the size and count information of the blob format for the MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute.
Size of this header + all following FaceCharacterization structures.
Number of FaceCharacterization structures in the blob. Must match the number of FaceRectInfo structures in FaceRectInfoBlobHeader.
The FaceRectInfo structure describes the blob format for the MF_CAPTURE_METADATA_FACEROIS attribute.
The MF_CAPTURE_METADATA_FACEROIS attribute contains the face rectangle info detected by the driver. By default, the driver/MFT0 should provide the face information on the preview stream. If the driver advertises the capability on other streams, the driver/MFT must provide the face info on the corresponding streams if the application enables face detection on those streams. When video stabilization is enabled on the driver, the face information should be provided post-video stabilization. The dominant face must be the first FaceRectInfo in the blob.
The FaceRectInfoBlobHeader and FaceRectInfo structures only describe the blob format for the MF_CAPTURE_METADATA_FACEROIS attribute. The metadata item structure for face ROIs (KSCAMERA_METADATA_ITEMHEADER + face ROIs metadata payload) is up to the driver and must be 8-byte aligned.
Relative coordinates on the frame on which face detection is running (Q31 format).
Confidence level of the region being a face (0 - 100).
The FaceRectInfoBlobHeader structure describes the size and count information of the blob format for the MF_CAPTURE_METADATA_FACEROIS attribute.
Size of this header + all following FaceRectInfo structures.
Number of FaceRectInfo structures in the blob.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A vector with two components.
X component of the vector.
Y component of the vector.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A vector with three components.
X component of the vector.
Y component of the vector.
Z component of the vector.
Contains coefficients used to transform multichannel audio into a smaller number of audio channels. This process is called fold-down.
To specify this information in the media type, set the
The ASF media source supports fold-down from six channels (5.1 audio) to two channels (stereo). It gets the information from the g_wszFold6To2Channels3 attribute in the ASF header. This attribute is documented in the Windows Media Format SDK documentation.
Size of the structure, in bytes.
Number of source channels.
Number of destination channels.
Specifies the assignment of audio channels to speaker positions in the transformed audio. This member is a bitwise OR of flags that define the speaker positions. For a list of valid flags, see
Array that contains the fold-down coefficients. The number of coefficients is cSrcChannels × cDstChannels. If the number of coefficients is less than the size of the array, the remaining elements in the array are ignored. For more information about how the coefficients are applied, see Windows Media Audio Professional Codec Features.
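As a sketch of how a coefficient matrix of this shape is applied, the snippet below folds one frame of source samples down to fewer destination channels. The row-major layout (one row of cDstChannels coefficients per source channel) is an assumption for illustration; consult the codec documentation for the authoritative ordering and scaling.

```c
#include <assert.h>
#include <math.h>

/* Fold one frame of cSrcChannels samples down to cDstChannels samples.
   coef is assumed row-major: coef[i * cDstChannels + j] weights source
   channel i into destination channel j (illustrative layout). */
void fold_down(const float *coef, int cSrcChannels, int cDstChannels,
               const float *src, float *dst)
{
    for (int j = 0; j < cDstChannels; j++) {
        dst[j] = 0.0f;
        for (int i = 0; i < cSrcChannels; i++)
            dst[j] += coef[i * cDstChannels + j] * src[i];
    }
}
```

For example, folding stereo to mono with coefficients {0.5, 0.5} averages the two channels.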
The HistogramBlobHeader structure describes the blob size and the number of histograms in the blob for the MF_CAPTURE_METADATA_HISTOGRAM attribute.
Size of the entire histogram blob in bytes.
Number of histograms in the blob. Each histogram is identified by a HistogramHeader.
The HistogramDataHeader structure describes the blob format for the MF_CAPTURE_METADATA_HISTOGRAM attribute.
Size in bytes of this header + all following histogram data.
Mask of the color channel for the histogram data.
1 if linear, 0 if nonlinear.
The HistogramGrid structure describes the blob format for MF_CAPTURE_METADATA_HISTOGRAM.
Width of the sensor output that the histogram is collected from.
Height of the sensor output that the histogram is collected from.
Absolute coordinates of the region on the sensor output that the histogram is collected for.
The HistogramHeader structure describes the blob format for MF_CAPTURE_METADATA_HISTOGRAM.
The MF_CAPTURE_METADATA_HISTOGRAM attribute contains a histogram when a preview frame is captured.
For the ChannelMasks field, the following bitmasks indicate the available channels in the histogram:
#define MF_HISTOGRAM_CHANNEL_Y 0x00000001
#define MF_HISTOGRAM_CHANNEL_R 0x00000002
#define MF_HISTOGRAM_CHANNEL_G 0x00000004
#define MF_HISTOGRAM_CHANNEL_B 0x00000008
#define MF_HISTOGRAM_CHANNEL_Cb 0x00000010
#define MF_HISTOGRAM_CHANNEL_Cr 0x00000020
Each blob can contain multiple histograms collected from different regions or different color spaces of the same frame. Each histogram in the blob is identified by its own HistogramHeader. Each histogram has its own region and associated sensor output size. For a full-frame histogram, the region matches the sensor output size specified in HistogramGrid.
Histogram data for all available channels are grouped under one histogram. Histogram data for each channel is identified by a HistogramDataHeader immediately above the data. ChannelMasks indicates which channels have histogram data; it is the bitwise OR of the supported MF_HISTOGRAM_CHANNEL_* bitmasks defined above. ChannelMask indicates which channel the data is for, identified by exactly one of the MF_HISTOGRAM_CHANNEL_* bitmasks.
Histogram data is an array of ULONG, with each entry representing the number of pixels falling under a set of tonal values as categorized by the bin. The data in the array should run from bin 0 to bin N-1, where N is the number of bins in the histogram (that is, HistogramHeader.Bins).
For Windows 10, if KSPROPERTY_CAMERACONTROL_EXTENDED_HISTOGRAM is supported, at minimum a full-frame histogram with the Y channel must be provided, and it should be the first histogram in the histogram blob.
Note that HistogramBlobHeader, HistogramHeader, HistogramDataHeader and the histogram data only describe the blob format for the MF_CAPTURE_METADATA_HISTOGRAM attribute. The metadata item structure for the histogram (KSCAMERA_METADATA_ITEMHEADER + all histogram metadata payload) is up to the driver and must be 8-byte aligned.
Size of this header + (HistogramDataHeader + histogram data following) * number of channels available.
Number of bins in the histogram.
Color space that the histogram is collected from.
Masks of the color channels that the histogram is collected for.
Grid that the histogram is collected from.
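The size rule in the HistogramHeader description can be sketched numerically: count the channels set in ChannelMasks, then add one (HistogramDataHeader + bin data) section per channel. The header sizes passed in below are illustrative parameters, not the SDK `sizeof` values, and each bin entry is a 4-byte ULONG.

```c
#include <assert.h>

/* Count the channels present in a ChannelMasks value (a bitwise OR of
   the MF_HISTOGRAM_CHANNEL_* bits listed above). */
unsigned int channel_count(unsigned int channelMasks)
{
    unsigned int n = 0;
    while (channelMasks) { n += channelMasks & 1u; channelMasks >>= 1; }
    return n;
}

/* Expected size of one histogram section, per the layout described
   above: header + (data header + 4-byte ULONG per bin) for each channel.
   headerSize and dataHeaderSize are illustrative inputs. */
unsigned int histogram_section_size(unsigned int headerSize,
                                    unsigned int dataHeaderSize,
                                    unsigned int bins,
                                    unsigned int channelMasks)
{
    return headerSize +
           channel_count(channelMasks) * (dataHeaderSize + bins * 4u);
}
```

For instance, a 256-bin histogram carrying Y and R channels occupies the header plus two (data header + 1024-byte) sections.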
Describes an action requested by an output trust authority (OTA). The request is sent to an input trust authority (ITA).
Specifies the action as a member of the
Pointer to a buffer that contains a ticket object, provided by the OTA.
Size of the ticket object, in bytes.
Contains parameters for the
Specifies the buffering requirements of a file.
This structure describes the buffering requirements for content encoded at the bit rate specified in the dwBitrate member. The msBufferWindow member indicates how much data should be buffered before starting playback. The size of the buffer in bytes is msBufferWindow × dwBitrate / 8000.
Bit rate, in bits per second.
Size of the buffer window, in milliseconds.
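The buffer-size formula above converts milliseconds to seconds (÷1000) and bits to bytes (÷8), hence the combined divisor of 8000. A minimal sketch (illustrative function name):

```c
#include <assert.h>

/* Buffer size in bytes for content at dwBitrate bits per second with a
   buffer window of msBufferWindow milliseconds:
   bytes = msBufferWindow * dwBitrate / 8000. */
unsigned long long buffer_size_bytes(unsigned int msBufferWindow,
                                     unsigned int dwBitrate)
{
    return (unsigned long long)msBufferWindow * dwBitrate / 8000u;
}
```

For example, a 5-second window at 128 kbps requires 80,000 bytes of buffering.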
The MetadataTimeStamps structure describes the blob format for the MF_CAPTURE_METADATA_FACEROITIMESTAMPS attribute.
The MF_CAPTURE_METADATA_FACEROITIMESTAMPS attribute contains the time stamp information for the face ROIs identified in MF_CAPTURE_METADATA_FACEROIS. For a device that cannot provide the time stamp for face ROIs, this attribute should be omitted.
For the Flags field, the following bit flags indicate which time stamp is valid:
#define MF_METADATATIMESTAMPS_DEVICE 0x00000001
#define MF_METADATATIMESTAMPS_PRESENTATION 0x00000002
MFT0 must set Flags to MF_METADATATIMESTAMPS_DEVICE and the appropriate QPC time for Device, if the driver provides the timestamp metadata for the face ROIs.
The MetadataTimeStamps structure only describes the blob format for the MF_CAPTURE_METADATA_FACEROITIMESTAMPS attribute. The metadata item structure for the timestamp (KSCAMERA_METADATA_ITEMHEADER + timestamp metadata payload) is up to the driver and must be 8-byte aligned.
Bitwise OR of the MF_METADATATIMESTAMPS_* flags.
QPC time for the sample the face rectangle is derived from (in 100ns).
PTS for the sample the face rectangle is derived from (in 100ns).
Provides information on a screen-to-screen move and a dirty rectangle copy operation.
A
A
Contains encoding statistics from the Digital Living Network Alliance (DLNA) media sink.
This structure is used with the
Contains format data for a binary stream in an Advanced Streaming Format (ASF) file.
This structure is used with the
This structure corresponds to the first 60 bytes of the Type-Specific Data field of the Stream Properties Object, in files where the stream type is ASF_Binary_Media. For more information, see the ASF specification.
The Format Data field of the Type-Specific Data field is contained in the
Major media type. This value is the
Media subtype.
If TRUE, samples have a fixed size in bytes. Otherwise, samples have variable size.
If TRUE, the data in this stream uses temporal compression. Otherwise, samples are independent of each other.
If bFixedSizeSamples is TRUE, this member specifies the sample size in bytes. Otherwise, the value is ignored and should be 0.
Format type
Defines custom color primaries for a video source. The color primaries define how to convert colors from RGB color space to CIE XYZ color space.
This structure is used with the
Red x-coordinate.
Red y-coordinate.
Green x-coordinate.
Green y-coordinate.
Blue x-coordinate.
Blue y-coordinate.
White point x-coordinate.
White point y-coordinate.
Contains the authentication information for the credential manager.
The response code of the authentication challenge. For example, NS_E_PROXY_ACCESSDENIED.
Set this flag to TRUE if the currently logged-on user's credentials should be used as the default credentials.
If TRUE, the authentication package will send unencrypted credentials over the network. Otherwise, the authentication package encrypts the credentials.
The original URL that requires authentication.
The name of the site or proxy that requires authentication.
The name of the realm for this authentication.
The name of the authentication package. For example, "Digest" or "MBS_BASIC".
The number of times that the credential manager should retry after authentication fails.
Specifies an offset as a fixed-point real number.
The value of the number is value + (fract / 65536.0f).
The fractional part of the number.
The integer part of the number.
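The conversion in the remark above is a direct computation; a minimal sketch with an illustrative function name:

```c
#include <assert.h>
#include <math.h>

/* Convert the fixed-point offset described above to a float:
   result = value + fract / 65536.0f. */
float offset_to_float(short value, unsigned short fract)
{
    return (float)value + (float)fract / 65536.0f;
}
```

For example, value = 3 with fract = 32768 represents 3.5, since 32768 / 65536 = 0.5.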
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
If the flags member contains the
To cancel authentication, set fProceedWithAuthentication equal to
By default, MFPlay uses the network source's implementation of
Contains one palette entry in a color table.
This union can be used to represent both RGB palettes and Y'Cb'Cr' palettes. The video format that defines the palette determines which union member should be used.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
This event is not used to signal the failure of an asynchronous
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains information that is common to every type of MFPlay event.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a pinhole camera intrinsic model for a specified resolution.
The width for the pinhole camera intrinsic model.
The height for the pinhole camera intrinsic model.
The pinhole camera model.
The lens distortion model.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Contains zero or one pinhole camera intrinsic model that describes how to project a 3D point in the physical world onto the 2D image frame of a camera.
The number of camera intrinsic models in the IntrinsicModels array.
The array of camera intrinsic models in the intrinsic data.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Media items are created asynchronously. If multiple items are created, the operations can complete in any order, not necessarily in the same order as the method calls. You can use the dwUserData member to identify the items, if you have simultaneous requests pending.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
If one or more streams could not be connected to a media sink, the event property store contains the MFP_PKEY_StreamRenderingResults property. The value of the property is an array of
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
If MFEventType is
Property | Description |
---|---|
MFP_PKEY_StreamIndex | The index of the stream whose format changed. |
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A four-dimensional vector, used to represent a rotation.
X component of the vector.
Y component of the vector.
Z component of the vector.
W component of the vector.
Represents a ratio.
Numerator of the ratio.
Denominator of the ratio.
Defines a region of interest.
The bounds of the region.
Specifies the quantization parameter delta for the specified region from the rest of the frame.
Contains information about a revoked component.
Specifies the reason for the revocation. The following values are defined.
Value | Meaning |
---|---|
| A boot driver could not be verified. |
| A certificate in a trusted component's certificate chain was revoked. |
| The high-security certificate for authenticating the protected environment (PE) was revoked. The high-security certificate is typically used by ITAs that handle high-definition content and next-generation formats such as HD-DVD. |
| A certificate's extended key usage (EKU) object is invalid. |
| The root certificate is not valid. |
| The low-security certificate for authenticating the PE was revoked. The low-security certificate is typically used by ITAs that handle standard-definition content and current-generation formats. |
| A trusted component was revoked. |
| The GRL was not found. |
| Could not load the global revocation list (GRL). |
| The GRL signature is invalid. |
| A certificate chain was not well-formed, or a boot driver is unsigned or is signed with an untrusted certificate. |
| A component was signed by a test certificate. |
In addition, one of the following flags might be present, indicating the type of component that failed to load.
Value | Meaning |
---|---|
| User-mode component. |
| Kernel-mode component. |
Contains a hash of the file header.
Contains a hash of the public key in the component's certificate.
File name of the revoked component.
Contains information about one or more revoked components.
Revocation information version.
Number of elements in the pRRComponents array.
Array of
Contains statistics about the performance of the sink writer.
The size of the structure, in bytes.
The time stamp of the most recent sample given to the sink writer. The sink writer updates this value each time the application calls
The time stamp of the most recent sample to be encoded. The sink writer updates this value whenever it calls
The time stamp of the most recent sample given to the media sink. The sink writer updates this value whenever it calls
The time stamp of the most recent stream tick. The sink writer updates this value whenever the application calls
The system time of the most recent sample request from the media sink. The sink writer updates this value whenever it receives an
The number of samples received.
The number of samples encoded.
The number of samples given to the media sink.
The number of stream ticks received.
The amount of data, in bytes, currently waiting to be processed.
The total amount of data, in bytes, that has been sent to the media sink.
The number of pending sample requests.
The average rate, in media samples per 100-nanoseconds, at which the application sent samples to the sink writer.
The average rate, in media samples per 100-nanoseconds, at which the sink writer sent samples to the encoder.
The average rate, in media samples per 100-nanoseconds, at which the sink writer sent samples to the media sink.
Not for application use.
This structure is used internally by the Microsoft Media Foundation AVStream proxy.
Reserved.
Reserved.
Contains information about an input stream on a Media Foundation transform (MFT). To get these values, call
Before the media types are set, the only values that should be considered valid are the
The
The
After you set a media type on all of the input and output streams (not including optional streams), all of the values returned by the GetInputStreamInfo method are valid. They might change if you set different media types.
Specifies a new attribute value for a topology node.
Due to an error in the structure declaration, the u64 member is declared as a 32-bit integer, not a 64-bit integer. Therefore, any 64-bit value passed to the
The identifier of the topology node to update. To get the identifier of a topology node, call
Attribute type, specified as a member of the
Attribute value (unsigned 32-bit integer). This member is used when attrType equals
Attribute value (unsigned 32-bit integer). This member is used when attrType equals
Attribute value (floating point). This member is used when attrType equals
Contains information about an output buffer for a Media Foundation transform. This structure is used in the
You must provide an
MFTs can support two different allocation models for output samples:
To find which model the MFT supports for a given output stream, call
Flag | Allocation Model |
---|---|
The MFT allocates the output samples for the stream. Set pSample to | |
The MFT supports both allocation models. | |
Neither (default) | The client must allocate the output samples for the stream. |
The behavior of ProcessOutput depends on the initial value of pSample and the value of the dwFlags parameter in the ProcessOutput method.
If pSample is
Restriction: This output stream must have the
If pSample is
Restriction: This output stream must have the
If pSample is non-
Restriction: This output stream must not have the
Any other combinations are invalid and cause ProcessOutput to return E_INVALIDARG.
Each call to ProcessOutput can produce zero or more events and up to one sample per output stream.
Contains information about an output stream on a Media Foundation transform (MFT). To get these values, call
Before the media types are set, the only values that should be considered valid are the
After you set a media type on all of the input and output streams (not including optional streams), all of the values returned by the GetOutputStreamInfo method are valid. They might change if you set different media types.
Contains information about the audio and video streams for the transcode sink activation object.
To get the information stored in this structure, call
The
Contains media type information for registering a Media Foundation transform (MFT).
The major media type. For a list of possible values, see Major Media Types.
The media subtype. For a list of possible values, see the following topics:
Contains parameters for the
Specifies a rectangular area within a video frame.
An
An
A
Contains information about a video compression format. This structure is used in the
For uncompressed video formats, set the structure members to zero.
Describes a video format.
Applications should avoid using this structure. Instead, it is recommended that applications use attributes to describe the video format. For a list of media type attributes, see Media Type Attributes. With attributes, you can set just the format information that you know, which is easier (and more likely to be accurate) than trying to fill in complete format information for the
To initialize a media type object from an
You can use the
Size of the structure, in bytes. This value includes the size of the palette entries that may appear after the surfaceInfo member.
Video subtype. See Video Subtype GUIDs.
Contains video format information that applies to both compressed and uncompressed formats.
This structure is used in the
Developers are encouraged to use media type attributes instead of using the
Structure Member | Media Type Attribute |
---|---|
dwWidth, dwHeight | |
PixelAspectRatio | |
SourceChromaSubsampling | |
InterlaceMode | |
TransferFunction | |
ColorPrimaries | |
TransferMatrix | |
SourceLighting | |
FramesPerSecond | |
NominalRange | |
GeometricAperture | |
MinimumDisplayAperture | |
PanScanAperture | |
VideoFlags | See |
-
Defines a normalized rectangle, which is used to specify sub-rectangles in a video rectangle. When a rectangle N is normalized relative to some other rectangle R, it means the following:
The coordinate (0.0, 0.0) on N is mapped to the upper-left corner of R.
The coordinate (1.0, 1.0) on N is mapped to the lower-right corner of R.
Any coordinates of N that fall outside the range [0...1] are mapped to positions outside the rectangle R. A normalized rectangle can be used to specify a region within a video rectangle without knowing the resolution or even the aspect ratio of the video. For example, the upper-left quadrant is defined as {0.0, 0.0, 0.5, 0.5}.
-X-coordinate of the upper-left corner of the rectangle.
Y-coordinate of the upper-left corner of the rectangle.
X-coordinate of the lower-right corner of the rectangle.
Y-coordinate of the lower-right corner of the rectangle.
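Taken together, the four fields map straightforwardly into pixel coordinates. The sketch below illustrates the mapping; `NormalizedRect`, `PixelRect`, and `Denormalize` are illustrative stand-ins, not the actual Media Foundation types:

```cpp
// Plain structs standing in for the normalized-rectangle structure and a
// pixel-space RECT (illustrative, not the Media Foundation definitions).
struct NormalizedRect { float left, top, right, bottom; };
struct PixelRect { long left, top, right, bottom; };

// Map a rectangle N, normalized relative to a width x height rectangle R:
// (0.0, 0.0) -> upper-left corner of R, (1.0, 1.0) -> lower-right corner of R.
PixelRect Denormalize(const NormalizedRect& n, long width, long height) {
    PixelRect r;
    r.left   = static_cast<long>(n.left   * width);
    r.top    = static_cast<long>(n.top    * height);
    r.right  = static_cast<long>(n.right  * width);
    r.bottom = static_cast<long>(n.bottom * height);
    return r;
}
```

For a 1920x1080 frame, the upper-left quadrant {0.0, 0.0, 0.5, 0.5} maps to the pixel rectangle {0, 0, 960, 540}, independent of the video's resolution or aspect ratio.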
Contains information about an uncompressed video format. This structure is used in the
Applies to: desktop apps | Metro style apps
Initializes Microsoft Media Foundation.
- An application must call this function before using Media Foundation. Before your application quits, call
Do not call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Shuts down the Microsoft Media Foundation platform. Call this function once for every call to
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Represents an audio data buffer, used with
XAudio2 audio data is interleaved; data from each channel is adjacent for a particular sample number. For example, if a 4-channel wave is playing into an XAudio2 source voice, the audio data would be a sample of channel 0, a sample of channel 1, a sample of channel 2, a sample of channel 3, and then the next sample of channels 0, 1, 2, 3, and so on.
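The interleaved layout described above means a given channel's sample can be located with simple index arithmetic. The helper below is illustrative, not part of the XAudio2 API:

```cpp
#include <cstddef>

// For interleaved audio, the sample for a given channel within a frame
// (one sample per channel) lives at: frameIndex * channelCount + channel.
std::size_t InterleavedIndex(std::size_t frameIndex,
                             std::size_t channelCount,
                             std::size_t channel) {
    return frameIndex * channelCount + channel;
}
```

For a 4-channel buffer, frame 1 of channel 0 immediately follows frame 0 of channel 3.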
The AudioBytes and pAudioData members of
Memory allocated to hold a
Contains information about an XAPO for use in an effect chain.
-XAPO instances are passed to XAudio2 as
For additional information on using XAPOs with XAudio2 see How to: Create an Effect Chain and How to: Use an XAPO in XAudio2.
-The
This interface should be implemented by the XAudio2 client. XAudio2 calls these methods via an interface reference provided by the client, using the XAudio2Create method. Methods in this interface return void, rather than an
See XAudio2 Callbacks for restrictions on callback implementation.
Describes I3DL2 (Interactive 3D Audio Rendering Guidelines Level 2.0) parameters for use in the ReverbConvertI3DL2ToNative function.
-There are many preset values defined for the
Describes parameters for use in the reverb APO.
-All parameters related to sampling rate or time are relative to a 48kHz voice and must be scaled for use with other sampling rates. For example, setting ReflectionsDelay to 300ms gives a true 300ms delay when the reverb is hosted in a 48kHz voice, but becomes a 150ms delay when hosted in a 24kHz voice.
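Per the example above (300 ms set on a 24 kHz voice behaves as 150 ms), the scaling can be sketched as follows; the helper names are illustrative, not part of the XAudio2 API:

```cpp
// A time parameter set to P ms behaves as P * (sampleRate / 48000) ms when
// the reverb runs in a voice at sampleRate. To obtain a desired true delay,
// scale the parameter value the other way before setting it.
float EffectiveDelayMs(float paramMs, float sampleRate) {
    return paramMs * sampleRate / 48000.0f;
}
float ParamForDesiredDelayMs(float desiredMs, float sampleRate) {
    return desiredMs * 48000.0f / sampleRate;
}
```

So to get a true 300 ms delay in a 24 kHz voice, the parameter would be set to 600 ms.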
-Percentage of the output that will be reverb. Allowable values are from 0 to 100.
The delay time of the first reflection relative to the direct path. Permitted range is from 0 to 300 milliseconds.
Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates. See the Remarks section below for additional information.
Delay of reverb relative to the first reflection. Permitted range is from 0 to 85 milliseconds.
Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates. See the Remarks section below for additional information.
Delay for the left rear output and right rear output. Permitted range is from 0 to 5 milliseconds.
Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates. See the Remarks section below for additional information.
Delay for the left side output and right side output. Permitted range is from 0 to 5 milliseconds.
Note: This value is supported beginning with Windows 10.
Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates. See the Remarks section below for additional information.
Position of the left input within the simulated space relative to the listener. With PositionLeft set to the minimum value, the left input is placed close to the listener. In this position, early reflections are dominant, and the reverb decay is set back in the sound field and reduced in amplitude. With PositionLeft set to the maximum value, the left input is placed at a maximum distance from the listener within the simulated room. PositionLeft does not affect the reverb decay time (liveness of the room), only the apparent position of the source relative to the listener. Permitted range is from 0 to 30 (no units).
Same as PositionLeft, but affecting only the right input. Permitted range is from 0 to 30 (no units).
Note: PositionRight is ignored in mono-in/mono-out mode.
Gives a greater or lesser impression of distance from the source to the listener. Permitted range is from 0 to 30 (no units).
Gives a greater or lesser impression of distance from the source to the listener. Permitted range is from 0 to 30 (no units).
Note: PositionMatrixRight is ignored in mono-in/mono-out mode.
Controls the character of the individual wall reflections. Set to minimum value to simulate a hard flat surface and to maximum value to simulate a diffuse surface. Permitted range is from 0 to 15 (no units).
Controls the character of the individual wall reverberations. Set to minimum value to simulate a hard flat surface and to maximum value to simulate a diffuse surface. Permitted range is from 0 to 15 (no units). -
Adjusts the decay time of low frequencies relative to the decay time at 1 kHz. The values correspond to dB of gain as follows:
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Gain (dB) | -8 | -7 | -6 | -5 | -4 | -3 | -2 | -1 | 0 | +1 | +2 | +3 | +4 |
Note: A LowEQGain value of 8 results in the decay time of low frequencies being equal to the decay time at 1 kHz.
Permitted range is from 0 to 12 (no units).
Sets the corner frequency of the low pass filter that is controlled by the LowEQGain parameter. The values correspond to frequency in Hz as follows:
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
---|---|---|---|---|---|---|---|---|---|---|
Frequency (Hz) | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
Permitted range is from 0 to 9 (no units).
Adjusts the decay time of high frequencies relative to the decay time at 1 kHz. When set to zero, high frequencies decay at the same rate as 1 kHz. When set to maximum value, high frequencies decay at a much faster rate than 1 kHz.
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
---|---|---|---|---|---|---|---|---|---|
Gain (dB) | -8 | -7 | -6 | -5 | -4 | -3 | -2 | -1 | 0 |
Permitted range is from 0 to 8 (no units).
Sets the corner frequency of the high pass filter that is controlled by the HighEQGain parameter. The values correspond to frequency in kHz as follows:
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Frequency (kHz) | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 | 5.5 | 6 | 6.5 | 7 | 7.5 | 8 |
Permitted range is from 0 to 14 (no units).
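The four EQ parameter tables above are linear mappings, which can be captured in a few one-line helpers (illustrative names, not part of the XAudio2 API):

```cpp
// Linear mappings taken directly from the parameter tables above.
int   LowEQGainDb(int value)     { return value - 8; }           // 0..12 -> -8..+4 dB
int   LowEQCutoffHz(int value)   { return 50 + 50 * value; }     // 0..9  -> 50..500 Hz
int   HighEQGainDb(int value)    { return value - 8; }           // 0..8  -> -8..0 dB
float HighEQCutoffKHz(int value) { return 1.0f + 0.5f * value; } // 0..14 -> 1..8 kHz
```

Note that a LowEQGain of 8 maps to 0 dB, consistent with the note that this setting leaves low-frequency decay equal to the decay at 1 kHz.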
Sets the corner frequency of the low pass filter for the room effect. Permitted range is from 20 to 20,000 Hz.
Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates. See the Remarks section below for additional information.
Sets the pass band intensity level of the low-pass filter for both the early reflections and the late field reverberation. Permitted range is from -100 to 0 dB.
Sets the intensity of the low-pass filter for both the early reflections and the late field reverberation at the corner frequency (RoomFilterFreq). Permitted range is from -100 to 0 dB.
Adjusts the intensity of the early reflections. Permitted range is from -100 to 20 dB.
Adjusts the intensity of the reverberations. Permitted range is from -100 to 20 dB.
Reverberation decay time at 1 kHz. This is the time that a full scale input signal decays by 60 dB. Permitted range is from 0.1 to infinity seconds.
Controls the modal density in the late field reverberation. For colorless spaces, Density should be set to the maximum value (100). As Density is decreased, the sound becomes hollow (comb filtered). This is an effect that can be useful if you are trying to model a silo. Permitted range as a percentage is from 0 to 100.
The apparent size of the acoustic space. Permitted range is from 1 to 100 feet.
If set to TRUE, disables late field reflection calculations. Disabling late field reflection calculations results in a significant CPU time savings.
Note: The DirectX SDK versions of XAUDIO2 don't support this member.
Describes parameters for use with the volume meter APO.
-This structure is used with the XAudio2
pPeakLevels and pRMSLevels are not returned by
ChannelCount must be set by the application to match the number of channels in the voice the effect is applied to.
-Array that will be filled with the maximum absolute level for each channel during a processing pass. The array must be at least ChannelCount × sizeof(float) bytes. pPeakLevels may be
Array that will be filled with root mean square level for each channel during a processing pass. The array must be at least ChannelCount × sizeof(float) bytes. pRMSLevels may be
Number of channels being processed.
Represents an audio data buffer, used with
XAudio2 audio data is interleaved; data from each channel is adjacent for a particular sample number. For example, if a 4-channel wave is playing into an XAudio2 source voice, the audio data would be a sample of channel 0, a sample of channel 1, a sample of channel 2, a sample of channel 3, and then the next sample of channels 0, 1, 2, 3, and so on.
The AudioBytes and pAudioData members of
Memory allocated to hold a
Indicates the filter type.
-Attenuates (reduces) frequencies above the cutoff frequency.
Attenuates frequencies outside a given range.
Attenuates frequencies below the cutoff frequency.
Attenuates frequencies inside a given range.
Attenuates frequencies above the cutoff frequency. This is a one-pole filter, and
Attenuates frequencies below the cutoff frequency. This is a one-pole filter, and
Contains information about the creation flags, input channels, and sample rate of a voice.
-Note the DirectX SDK versions of XAUDIO2 do not support the ActiveFlags member.
-Flags used to create the voice; see the individual voice interfaces for more information.
Flags that are currently set on the voice.
The number of input channels the voice expects.
The input sample rate the voice expects.
XAudio2 constants that specify default parameters, maximum values, and flags.
XAudio2 boundary values
-A mastering voice is used to represent the audio output device.
Data buffers cannot be submitted directly to mastering voices, but data submitted to other types of voices must be directed to a mastering voice to be heard. -
Returns the channel mask for this voice.
- Returns the channel mask for this voice. This corresponds to the dwChannelMask member of the
This method does not return a value.
The pChannelMask argument is a bit-mask of the various channels in the speaker geometry reported by the audio system. This information is needed for the X3DAudioInitialize SpeakerChannelMask parameter.
The X3DAUDIO.H header declares a number of SPEAKER_ positional defines to decode these channels masks.
Examples include a stereo mask of (0x1) | (0x2) (front left and front right) and a 5.1 mask of (0x1) | (0x2) | (0x4) | (0x8) | (0x10) | (0x20).
Note: For the DirectX SDK versions of XAUDIO2, the channel mask for the output device was obtained via the IXAudio2::GetDeviceDetails method, which doesn't exist in Windows 8 and later.
Returns the channel mask for this voice. (Only valid for XAudio 2.8, returns 0 otherwise)
-The pChannelMask argument is a bit-mask of the various channels in the speaker geometry reported by the audio system. This information is needed for the
The X3DAUDIO.H header declares a number of SPEAKER_ positional defines to decode these channels masks.
Examples include a stereo mask of (0x1) | (0x2) (front left and front right) and a 5.1 mask of (0x1) | (0x2) | (0x4) | (0x8) | (0x10) | (0x20).
Note: For the DirectX SDK versions of XAUDIO2, the channel mask for the output device was obtained via the IXAudio2::GetDeviceDetails method, which doesn't exist in Windows 8 and later.
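A common use of the returned mask is to derive the channel count by counting set bits. The `SPEAKER_` values below are the standard Windows positional defines (declared in X3DAUDIO.H and ksmedia.h); `ChannelCountFromMask` is an illustrative helper, not part of the XAudio2 API:

```cpp
#include <cstdint>

// Standard Windows SPEAKER_ positional bits.
constexpr uint32_t SPEAKER_FRONT_LEFT    = 0x1;
constexpr uint32_t SPEAKER_FRONT_RIGHT   = 0x2;
constexpr uint32_t SPEAKER_FRONT_CENTER  = 0x4;
constexpr uint32_t SPEAKER_LOW_FREQUENCY = 0x8;
constexpr uint32_t SPEAKER_BACK_LEFT     = 0x10;
constexpr uint32_t SPEAKER_BACK_RIGHT    = 0x20;

// Count the channels present in a mask such as the one GetChannelMask returns.
int ChannelCountFromMask(uint32_t mask) {
    int count = 0;
    while (mask) { count += mask & 1u; mask >>= 1; }
    return count;
}
```

A 5.1 mask (0x3F) yields 6 channels; a stereo mask yields 2.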
-Use a source voice to submit audio data to the XAudio2 processing pipeline.You must send voice data to a mastering voice to be heard, either directly or through intermediate submix voices. -
-Returns the frequency adjustment ratio of the voice.
-GetFrequencyRatio always returns the voice's actual current frequency ratio. However, this may not match the ratio set by the most recent
For information on frequency ratios, see
Reconfigures the voice to consume source data at a different sample rate than the rate specified when the voice was created.
-The SetSourceSampleRate method supports reuse of XAudio2 voices by allowing a voice to play sounds with a variety of sample rates. To use SetSourceSampleRate the voice must have been created without the
The typical use of SetSourceSampleRate is to support voice pooling. For example, to support voice pooling, an application would precreate all the voices it expects to use. Whenever a new sound will be played, the application chooses an inactive voice or, if all voices are busy, picks the least important voice and calls SetSourceSampleRate on the voice with the new sound's sample rate. After SetSourceSampleRate has been called on the voice, the application can immediately start submitting and playing buffers with the new sample rate. This allows the application to avoid the overhead of frequently creating and destroying voices during gameplay. -
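The selection policy described above (prefer an inactive voice; otherwise evict the least important one) can be sketched independently of XAudio2. `PooledVoice` and `PickVoice` are illustrative names, not API types:

```cpp
#include <vector>

// Minimal stand-in for a pooled source voice (illustrative, not an XAudio2 type).
struct PooledVoice { bool active; int importance; };

// Return the index of an inactive voice, or, if all voices are busy,
// the index of the least important one. Returns -1 for an empty pool.
int PickVoice(const std::vector<PooledVoice>& pool) {
    int best = -1;
    for (int i = 0; i < static_cast<int>(pool.size()); ++i) {
        if (!pool[i].active) return i;  // free voice: use it immediately
        if (best < 0 || pool[i].importance < pool[best].importance)
            best = i;                   // least important voice seen so far
    }
    return best;
}
```

The chosen voice would then have SetSourceSampleRate called on it with the new sound's sample rate before buffers are submitted.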
-Starts consumption and processing of audio by the voice. Delivers the result to any connected submix or mastering voices, or to the output device.
-Flags that control how the voice is started. Must be 0.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
If the XAudio2 engine is stopped, the voice stops running. However, it remains in the started state, so that it starts running again as soon as the engine starts.
When first created, source voices are in the stopped state. Submix and mastering voices are in the started state.
After Start is called, it has no further effect if called again before
Stops consumption of audio by the current voice.
-Flags that control how the voice is stopped. Can be 0 or the following:
Value | Description |
---|---|
Continue emitting effect output after the voice is stopped. |
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
All source buffers that are queued on the voice and the current cursor position are preserved. This allows the voice to continue from where it left off, when it is restarted. The
By default, any pending output from voice effects (for example, reverb tails) is not played. Instead, the voice is immediately rendered silent. The
A voice stopped with the
Stop is always asynchronous, even if called within a callback.
Note: XAudio2 never calls any voice callbacks for a voice if the voice is stopped (even if it was stopped with
Adds a new audio buffer to the voice queue.
- Pointer to an
Pointer to an additional
Returns
The voice processes and plays back the buffers in its queue in the order that they were submitted.
The
If the voice is started and has no buffers queued, the new buffer will start playing immediately. If the voice is stopped, the buffer is added to the voice's queue and will be played when the voice starts.
If only part of the given buffer should be played, the PlayBegin and PlayLength fields in the
If all or part of the buffer should be played in a continuous loop, the LoopBegin, LoopLength and LoopCount fields in
If an explicit play region is specified, it must begin and end within the given audio buffer (or, in the compressed case, within the set of samples that the buffer will decode to). In addition, the loop region cannot end past the end of the play region.
Xbox 360 |
---|
For certain audio formats, there may be additional restrictions on the valid endpoints of any play or loop regions; e.g. for XMA buffers, the regions can only begin or end at 128-sample boundaries in the decoded audio. - |
The pBuffer reference can be reused or freed immediately after calling this method, but the actual audio data referenced by pBuffer must remain valid until the buffer has been fully consumed by XAudio2 (which is indicated by the
Up to
SubmitSourceBuffer takes effect immediately when called from an XAudio2 callback with an OperationSet of
Xbox 360 |
---|
This method can be called from an Xbox system thread (most other XAudio2 methods cannot). However, a maximum of two source buffers can be submitted from a system thread at a time. |
-Removes all pending audio buffers from the voice queue.
-Returns
If the voice is started, the buffer that is currently playing is not removed from the queue.
FlushSourceBuffers can be called regardless of whether the voice is currently started or stopped.
For every buffer removed, an OnBufferEnd callback will be made, but none of the other per-buffer callbacks (OnBufferStart, OnStreamEnd or OnLoopEnd) will be made.
FlushSourceBuffers does not change the voice's running state, so if the voice was playing a buffer prior to the call, it will continue to do so, and will deliver all the callbacks for the buffer normally. This means that the OnBufferEnd callback for this buffer will take place after the OnBufferEnd callbacks for the buffers that were removed. Thus, an XAudio2 client that calls FlushSourceBuffers cannot expect to receive OnBufferEnd callbacks in the order in which the buffers were submitted.
No warnings for starvation of the buffer queue will be emitted when the currently playing buffer completes; it is assumed that the client has intentionally removed the buffers that followed it. However, there may be an audio pop if this buffer does not end at a zero crossing. If the application must ensure that the flush operation takes place while a specific buffer is playing (perhaps because the buffer ends with a zero crossing), it must call FlushSourceBuffers from a callback, so that it executes synchronously.
Calling FlushSourceBuffers after a voice is stopped and then submitting new data to the voice resets all of the voice's internal counters.
A voice's state is not considered reset after calling FlushSourceBuffers until the OnBufferEnd callback occurs (if a buffer was previously submitted) or
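As a sketch of the zero-crossing idea mentioned in the FlushSourceBuffers remarks, an application might scan a buffer for a point where the waveform changes sign and end the buffer there to avoid an audible pop. `FindZeroCrossing` is an illustrative helper, not part of the XAudio2 API:

```cpp
#include <cstddef>

// Find the first index at or after `start` where consecutive samples change
// sign (or a sample is exactly zero). Returns -1 if no crossing is found.
std::ptrdiff_t FindZeroCrossing(const float* samples, std::size_t count,
                                std::size_t start) {
    for (std::size_t i = start + 1; i < count; ++i) {
        if (samples[i] == 0.0f ||
            (samples[i - 1] < 0.0f) != (samples[i] < 0.0f))
            return static_cast<std::ptrdiff_t>(i);
    }
    return -1;
}
```

Ending playback at such an index keeps the output near zero amplitude at the cut point.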
Notifies an XAudio2 voice that no more buffers are coming after the last one that is currently in its queue.
-Returns
Discontinuity suppresses the warnings that normally occur in the debug build of XAudio2 when a voice runs out of audio buffers to play. It is preferable to mark the final buffer of a stream by tagging it with the
Because calling Discontinuity is equivalent to applying the
Stops looping the voice when it reaches the end of the current loop region.
-Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
If the cursor for the voice is not in a loop region, ExitLoop does nothing.
-Returns the voice's current state and cursor position data.
-Number of audio buffers currently queued on the voice, including the one that is processed currently.
For all encoded formats, including constant bit rate (CBR) formats such as adaptive differential pulse code modulation (ADPCM), SamplesPlayed is expressed in terms of decoded samples. For pulse code modulation (PCM) formats, SamplesPlayed is expressed in terms of either input or output samples. There is a one-to-one mapping from input to output for PCM formats.
If a client needs to get the correlated positions of several voices (that is, to know exactly which sample of a particular voice is playing when a specified sample of another voice is playing), it must make the
Sets the frequency adjustment ratio of the voice.
-Frequency adjustment ratio. This value must be between
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
Frequency adjustment is expressed as source frequency / target frequency. Changing the frequency ratio changes the rate audio is played on the voice. A ratio greater than 1.0 will cause the audio to play faster and a ratio less than 1.0 will cause the audio to play slower. Additionally, the frequency ratio affects the pitch of audio on the voice. As an example, a value of 1.0 has no effect on the audio, whereas a value of 2.0 raises pitch by one octave and 0.5 lowers it by one octave.
If SetFrequencyRatio is called specifying a Ratio value outside the valid range, the method will set the frequency ratio to the nearest valid value. A warning also will be generated for debug builds.
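The octave relationship above (2.0 raises pitch one octave, 0.5 lowers it one octave) generalizes to equal-tempered semitones. These conversion helpers are illustrative, not part of the XAudio2 API:

```cpp
#include <cmath>

// Convert an equal-tempered semitone offset to a frequency ratio and back.
// +12 semitones (one octave up) corresponds to a ratio of 2.0.
float RatioFromSemitones(float semitones) {
    return std::pow(2.0f, semitones / 12.0f);
}
float SemitonesFromRatio(float ratio) {
    return 12.0f * std::log2(ratio);
}
```

An application would pass the computed ratio to SetFrequencyRatio to shift pitch by a musical interval.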
Returns the frequency adjustment ratio of the voice.
-Returns the current frequency adjustment ratio if successful.
GetFrequencyRatio always returns the voice's actual current frequency ratio. However, this may not match the ratio set by the most recent
For information on frequency ratios, see
Reconfigures the voice to consume source data at a different sample rate than the rate specified when the voice was created.
-The new sample rate the voice should process submitted data at. Valid sample rates are 1kHz to 200kHz.
Returns
The SetSourceSampleRate method supports reuse of XAudio2 voices by allowing a voice to play sounds with a variety of sample rates. To use SetSourceSampleRate the voice must have been created without the
The typical use of SetSourceSampleRate is to support voice pooling. For example, to support voice pooling, an application would precreate all the voices it expects to use. Whenever a new sound will be played, the application chooses an inactive voice or, if all voices are busy, picks the least important voice and calls SetSourceSampleRate on the voice with the new sound's sample rate. After SetSourceSampleRate has been called on the voice, the application can immediately start submitting and playing buffers with the new sample rate. This allows the application to avoid the overhead of frequently creating and destroying voices during gameplay. -
-A submix voice is used primarily for performance improvements and effects processing.
-Data buffers cannot be submitted directly to submix voices and will not be audible unless submitted to a mastering voice. A submix voice can be used to ensure that a particular set of voice data is converted to the same format and/or to have a particular effect chain processed on the collective result. -
Designates a new set of submix or mastering voices to receive the output of the voice.
-This method is only valid for source and submix voices. Mastering voices cannot send audio to another voice.
After calling SetOutputVoices a voice's current send levels will be replaced by a default send matrix. The
It is invalid to call SetOutputVoices from within a callback (that is,
Gets the voice's filter parameters.
-GetFilterParameters will fail if the voice was not created with the
GetFilterParameters always returns this voice's actual current filter parameters. However, these may not match the parameters set by the most recent
Sets the overall volume level for the voice.
-SetVolume controls a voice's master input volume level. The master volume level is applied at different times depending on the type of voice. For submix and mastering voices the volume level is applied just before the voice's built in filter and effect chain is applied. For source voices the master volume level is applied after the voice's filter and effect chain is applied.
Volume levels are expressed as floating-point amplitude multipliers between -
Returns information about the creation flags, input channels, and sample rate of a voice.
-
Designates a new set of submix or mastering voices to receive the output of the voice.
-Array of
Returns
This method is only valid for source and submix voices. Mastering voices cannot send audio to another voice.
After calling SetOutputVoices a voice's current send levels will be replaced by a default send matrix. The
It is invalid to call SetOutputVoices from within a callback (that is,
Replaces the effect chain of the voice.
-Pointer to an
Returns
See XAudio2 Error Codes for descriptions of XAudio2 specific error codes.
The number of output channels allowed for a voice's effect chain is locked at creation of the voice. If you create the voice with an effect chain, any new effect chain passed to SetEffectChain must have the same number of input and output channels as the original effect chain. If you create the voice without an effect chain, the number of output channels allowed for the effect chain will default to the number of input channels for the voice. If any part of effect chain creation fails, none of it is applied.
After you attach an effect to an XAudio2 voice, XAudio2 takes control of the effect, and the client should not make any further calls to it. The simplest way to ensure this is to release all references to the effect.
It is invalid to call SetEffectChain from within a callback (that is,
The
Enables the effect at a given position in the effect chain of the voice.
-Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
Be careful when you enable an effect while the voice that hosts it is running. Such an action can result in a problem if the effect significantly changes the audio's pitch or volume.
The effects in a given XAudio2 voice's effect chain must consume and produce audio at that voice's processing sample rate. The only aspect of the audio format they can change is the channel count. For example a reverb effect can convert mono data to 5.1. The client can use the
EnableEffect takes effect immediately when you call it from an XAudio2 callback with an OperationSet of
Disables the effect at a given position in the effect chain of the voice.
-Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
The effects in a given XAudio2 voice's effect chain must consume and produce audio at that voice's processing sample rate. The only aspect of the audio format they can change is the channel count. For example a reverb effect can convert mono data to 5.1. The client can use the
Disabling an effect immediately removes it from the processing graph. Any pending audio in the effect?such as a reverb tail?is not played. Be careful disabling an effect while the voice that hosts it is running. This can result in an audible artifact if the effect significantly changes the audio's pitch or volume.
DisableEffect takes effect immediately when called from an XAudio2 callback with an OperationSet of
Returns the running state of the effect at a specified position in the effect chain of the voice.
-Zero-based index of an effect in the effect chain of the voice.
GetEffectState always returns the effect's actual current state. However, this may not be the state set by the most recent
Sets parameters for a given effect in the voice's effect chain.
-Zero-based index of an effect within the voice's effect chain.
Returns the current values of the effect-specific parameters.
Size of the pParameters array in bytes.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
Fails with E_NOTIMPL if the effect does not support a generic parameter control interface.
The specific effect being used determines the valid size and format of the pParameters buffer. The call will fail if pParameters is invalid or if ParametersByteSize is not exactly the size that the effect expects. The client must take care to direct the SetEffectParameters call to the right effect. If this call is directed to a different effect that happens to accept the same parameter block size, the parameters will be interpreted differently. This may lead to unexpected results.
The memory pointed to by pParameters must not be freed immediately, because XAudio2 will need to refer to it later when the parameters actually are applied to the effect. This happens during the next audio processing pass if the OperationSet argument is
SetEffectParameters takes effect immediately when called from an XAudio2 callback with an OperationSet of
Returns the current effect-specific parameters of a given effect in the voice's effect chain.
-Zero-based index of an effect within the voice's effect chain.
Returns the current values of the effect-specific parameters.
Size, in bytes, of the pParameters array.
Returns
Fails with E_NOTIMPL if the effect does not support a generic parameter control interface.
GetEffectParameters always returns the effect's actual current parameters. However, these may not match the parameters set by the most recent call to
Sets the voice's filter parameters.
-Pointer to an
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
SetFilterParameters will fail if the voice was not created with the
This method is usable only on source and submix voices and has no effect on mastering voices.
Gets the voice's filter parameters.
-Pointer to an
GetFilterParameters will fail if the voice was not created with the
GetFilterParameters always returns this voice's actual current filter parameters. However, these may not match the parameters set by the most recent
Sets the filter parameters on one of this voice's sends.
-
Pointer to an
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
SetOutputFilterParameters will fail if the send was not created with the
Returns the filter parameters from one of this voice's sends.
-
Pointer to an
GetOutputFilterParameters will fail if the send was not created with the
Sets the overall volume level for the voice.
-Overall volume level to use. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
SetVolume controls a voice's master input volume level. The master volume level is applied at different times depending on the type of voice. For submix and mastering voices the volume level is applied just before the voice's built in filter and effect chain is applied. For source voices the master volume level is applied after the voice's filter and effect chain is applied.
Volume levels are expressed as floating-point amplitude multipliers between -
Sets the overall volume level for the voice.
-Overall volume level to use. See Remarks for more information on volume levels.
SetVolume controls a voice's master input volume level. The master volume level is applied at different times depending on the type of voice. For submix and mastering voices the volume level is applied just before the voice's built in filter and effect chain is applied. For source voices the master volume level is applied after the voice's filter and effect chain is applied.
Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB.
Sets the volume levels for the voice, per channel.
-Number of channels in the voice.
Array containing the new volumes of each channel in the voice. The array must have Channels elements. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
SetChannelVolumes controls a voice's per-channel output levels and is applied just after the voice's final SRC and before its sends.
This method is valid only for source and submix voices, because mastering voices do not specify volume per channel.
Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB.
Returns the volume levels for the voice, per channel.
-Confirms the channel count of the voice.
Returns the current volume level of each channel in the voice. The array must have at least Channels elements. See Remarks for more information on volume levels.
These settings are applied after the effect chain is applied. This method is valid only for source and submix voices, because mastering voices do not specify volume per channel.
Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB. A volume of 1 means there is no attenuation or gain, 0 means silence, and negative levels can be used to invert the audio's phase. See XAudio2 Volume and Pitch Control for additional information on volume control.
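To make the dB relationship concrete: an amplitude multiplier m corresponds to 20 * log10(m) dB, so 2^24 is roughly 144.5 dB. The sketch below mirrors the XAudio2DecibelsToAmplitudeRatio and XAudio2AmplitudeRatioToDecibels helpers that xaudio2.h declares (a minimal re-implementation for illustration, not the header's exact code):

```cpp
#include <cmath>

// Amplitude multiplier <-> decibel conversions, following the relationship
// described above: amplitude = 10^(dB / 20).
inline float DecibelsToAmplitudeRatio(float decibels)
{
    return std::pow(10.0f, decibels / 20.0f);
}

inline float AmplitudeRatioToDecibels(float volume)
{
    if (volume == 0.0f)
        return -3.402823466e+38f; // treat zero amplitude as "silence"
    return 20.0f * std::log10(volume);
}
```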
Note: GetChannelVolumes always returns the volume levels most recently set by SetChannelVolumes.
Sets the volume level of each channel of the final output for the voice. These channels are mapped to the input channels of a specified destination voice.
-Pointer to a destination
Confirms the output channel count of the voice. This is the number of channels that are produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice. The level sent from source channel S to destination channel D is specified in the form pLevelMatrix[SourceChannels × D + S].
For example, when rendering two-channel stereo input into 5.1 output that is weighted toward the front channels, but is absent from the center and low-frequency channels, the matrix might have the values shown in the following table.
Output | Left Input [Array Index] | Right Input [Array Index] |
---|---|---|
Left | 1.0 [0] | 0.0 [1] |
Right | 0.0 [2] | 1.0 [3] |
Front Center | 0.0 [4] | 0.0 [5] |
LFE | 0.0 [6] | 0.0 [7] |
Rear Left | 0.8 [8] | 0.0 [9] |
Rear Right | 0.0 [10] | 0.8 [11] |
Note: The left and right input are fully mapped to the output left and right channels; 80 percent of the left and right input is mapped to the rear left and right channels. See Remarks for more information on volume levels.
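The table above translates directly into the documented indexing pLevelMatrix[SourceChannels × D + S]. A minimal sketch (the helper names are illustrative, not part of the XAudio2 API) that builds this stereo-to-5.1 matrix:

```cpp
#include <cstddef>

// Stores a single level using the documented layout: the level from source
// channel S to destination channel D lives at matrix[sourceChannels * D + S].
inline void SetLevel(float* matrix, std::size_t sourceChannels,
                     std::size_t s, std::size_t d, float level)
{
    matrix[sourceChannels * d + s] = level;
}

// Fills the 2-in / 6-out (5.1) matrix from the table above.
inline void BuildStereoTo51Matrix(float out[12])
{
    const std::size_t src = 2; // stereo input
    for (std::size_t i = 0; i < 12; ++i) out[i] = 0.0f;
    SetLevel(out, src, /*S=*/0, /*D=*/0, 1.0f); // left  -> front left
    SetLevel(out, src, /*S=*/1, /*D=*/1, 1.0f); // right -> front right
    // destinations 2 (front center) and 3 (LFE) stay silent
    SetLevel(out, src, /*S=*/0, /*D=*/4, 0.8f); // left  -> rear left
    SetLevel(out, src, /*S=*/1, /*D=*/5, 0.8f); // right -> rear right
}
```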
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
This method is valid only for source and submix voices, because mastering voices write directly to the device with no matrix mixing.
Volume levels are expressed as floating-point amplitude multipliers between -
The X3DAudio function X3DAudioCalculate can produce an output matrix for use with SetOutputMatrix based on a sound's position and a listener's position.
Gets the volume level of each channel of the final output for the voice. These channels are mapped to the input channels of a specified destination voice.
-Pointer specifying the destination
Confirms the output channel count of the voice. This is the number of channels that are produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice. The level sent from source channel S to destination channel D is returned in the form pLevelMatrix[DestinationChannels × S + D]. See Remarks for more information on volume levels.
This method applies only to source and submix voices, because mastering voices write directly to the device with no matrix mixing. Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB. A volume level of 1 means there is no attenuation or gain and 0 means silence. Negative levels can be used to invert the audio's phase. See XAudio2 Volume and Pitch Control for additional information on volume control.
See
Destroys the voice. If necessary, stops the voice and removes it from the XAudio2 graph.
-If any other voice is currently sending audio to this voice, the method fails.
DestroyVoice waits for the audio processing thread to be idle, so it can take a little while (typically no more than a couple of milliseconds). This is necessary to guarantee that the voice will no longer make any callbacks or read any audio data, so the application can safely free up these resources as soon as the call returns.
To avoid title thread interruptions from a blocking DestroyVoice call, the application can destroy voices on a separate non-critical thread, or the application can use voice pooling strategies to reuse voices rather than destroying them. Note that voices can only be reused with audio that has the same data format and the same number of channels the voice was created with. A voice can play audio data with different sample rates than that of the voice by calling
It is invalid to call DestroyVoice from within a callback (that is,
Returns information about the creation flags, input channels, and sample rate of a voice.
-The
This interface should be implemented by the XAudio2 client. XAudio2 calls these methods through an interface reference provided by the client in the
See the XAudio2 Callbacks topic for restrictions on callback implementation.
This is the only XAudio2 interface that is derived from the COM
The DirectX SDK versions of XAUDIO2 included three member functions that are not present in the Windows 8 version: GetDeviceCount, GetDeviceDetails, and Initialize. These enumeration methods are no longer provided and standard Windows Audio APIs should be used for device enumeration instead.
-Returns current resource usage details, such as available memory or CPU usage.
-For specific information on the statistics returned by GetPerformanceData, see the
Adds an
Returns
This method can be called multiple times, allowing different components or layers of the same application to manage their own engine callback implementations separately.
It is invalid to call RegisterForCallbacks from within a callback (that is,
Removes an
It is invalid to call UnregisterForCallbacks from within a callback (that is,
Creates and configures a source voice.
-If successful, returns a reference to the new
Pointer to one of the structures in the table below. This structure contains the expected format for all audio buffers submitted to the source voice. XAudio2 supports PCM and ADPCM voice types.
Format tag | Wave format structure | Size (in bytes) |
---|---|---|
WAVE_FORMAT_PCM | PCMWAVEFORMAT | 16 |
-or- | WAVEFORMATEX | 18 |
WAVE_FORMAT_IEEE_FLOAT | PCMWAVEFORMAT | 18 |
WAVE_FORMAT_ADPCM | ADPCMWAVEFORMAT | 50 |
WAVE_FORMAT_EXTENSIBLE | WAVEFORMATEXTENSIBLE | 40 |
XAudio2 supports the following PCM formats.
The number of channels in a source voice must be less than or equal to
Flags that specify the behavior of the source voice. A flag can be 0 or a combination of one or more of the following:
Value | Description |
---|---|
XAUDIO2_VOICE_NOPITCH | No pitch control is available on the voice. |
XAUDIO2_VOICE_NOSRC | No sample rate conversion is available on the voice. The voice's outputs must have the same sample rate. Note: the XAUDIO2_VOICE_NOPITCH flag is also required when using this flag. |
XAUDIO2_VOICE_USEFILTER | The filter effect should be available on this voice. |
Note: The XAUDIO2_VOICE_MUSIC flag is not supported on Windows.
Highest allowable frequency ratio that can be set on this voice. The value for this argument must be between
If MaxFrequencyRatio is less than 1.0, the voice will use that ratio immediately after being created (rather than the default of 1.0).
Xbox 360 |
---|
For XMA voices, there is one more restriction on the MaxFrequencyRatio argument and the voice's sample rate. The product of these two numbers cannot exceed XAUDIO2_MAX_RATIO_TIMES_RATE_XMA_MONO for one-channel voices or XAUDIO2_MAX_RATIO_TIMES_RATE_XMA_MULTICHANNEL for voices with any other number of channels. If the value specified for MaxFrequencyRatio is too high for the specified format, the call to CreateSourceVoice fails and produces a debug message. |
Note: You can use the lowest possible MaxFrequencyRatio value to reduce XAudio2's memory usage.
Pointer to a client-provided callback interface,
Pointer to a list of
Pointer to a list of
Returns
See XAudio2 Error Codes for descriptions of XAudio2-specific error codes.
Source voices read audio data from the client. They process the data and send it to the XAudio2 processing graph.
A source voice includes a variable-rate sample rate conversion, to convert data from the source format sample rate to the output rate required for the voice send list. If you use a
You cannot create any source or submix voices until a mastering voice exists, and you cannot destroy a mastering voice if any source or submix voices still exist.
Source voices are always processed before any submix or mastering voices. This means that you do not need a ProcessingStage parameter to control the processing order.
When first created, source voices are in the stopped state.
XAudio2 uses an internal memory pooler for voices with the same format. This means memory allocation for voices will occur less frequently as more voices are created and then destroyed. To minimize just-in-time allocations, a title can create the anticipated maximum number of voices needed up front, and then delete them as necessary. Voices will then be reused from the XAudio2 pool. The memory pool is tied to an XAudio2 engine instance. You can reclaim all the memory used by an instance of the XAudio2 engine by destroying the XAudio2 object and recreating it as necessary (forcing the memory pool to grow via preallocation would have to be reapplied as needed).
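The pooling strategy described above can be sketched generically. The Voice struct and VoicePool class here are hypothetical stand-ins, not XAudio2 types; a real pool would park IXAudio2SourceVoice pointers keyed by their creation format:

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Hypothetical stand-in for a source voice; a real pool would hold
// IXAudio2SourceVoice pointers instead.
struct Voice { int channels; int sampleRate; bool started = false; };

// Parks stopped voices in free lists keyed by (channels, sampleRate) and
// reuses them on the next request for the same format, instead of
// destroying and recreating voices.
class VoicePool {
public:
    Voice acquire(int channels, int sampleRate) {
        auto& freeList = pool_[{channels, sampleRate}];
        if (!freeList.empty()) {
            Voice v = freeList.back(); // reuse a parked voice
            freeList.pop_back();
            return v;
        }
        return Voice{channels, sampleRate}; // otherwise create a new one
    }
    void release(Voice v) {
        v.started = false;
        pool_[{v.channels, v.sampleRate}].push_back(v);
    }
    std::size_t parked(int channels, int sampleRate) const {
        auto it = pool_.find({channels, sampleRate});
        return it == pool_.end() ? 0 : it->second.size();
    }
private:
    std::map<std::pair<int, int>, std::vector<Voice>> pool_;
};
```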
It is invalid to call CreateSourceVoice from within a callback (that is,
The
Creates and configures a submix voice.
-On success, returns a reference to the new
Number of channels in the input audio data of the submix voice. InputChannels must be less than or equal to
Sample rate of the input audio data of the submix voice. This rate must be a multiple of XAUDIO2_QUANTUM_DENOMINATOR. InputSampleRate must be between
Flags that specify the behavior of the submix voice. It can be 0 or the following:
Value | Description |
---|---|
The filter effect should be available on this voice. |
An arbitrary number that specifies when this voice is processed with respect to other submix voices, if the XAudio2 engine is running other submix voices. The voice is processed after all other voices that include a smaller ProcessingStage value and before all other voices that include a larger ProcessingStage value. Voices that include the same ProcessingStage value are processed in any order. A submix voice cannot send to another submix voice with a lower or equal ProcessingStage value. This prevents audio being lost due to a submix cycle.
Pointer to a list of
Pointer to a list of
Returns
See XAudio2 Error Codes for descriptions of XAudio2 specific error codes.
Submix voices receive the output of one or more source or submix voices. They process the output, and then send it to another submix voice or to a mastering voice.
A submix voice performs a sample rate conversion from the input sample rate to the input rate of its output voices in pSendList. If you specify multiple voice sends, they must all have the same input sample rate.
You cannot create any source or submix voices until a mastering voice exists, and you cannot destroy a mastering voice if any source or submix voices still exist.
When first created, submix voices are in the started state.
XAudio2 uses an internal memory pooler for voices with the same format. This means that memory allocation for voices will occur less frequently as more voices are created and then destroyed. To minimize just-in-time allocations, a title can create the anticipated maximum number of voices needed up front, and then delete them as necessary. Voices will then be reused from the XAudio2 pool. The memory pool is tied to an XAudio2 engine instance. You can reclaim all the memory used by an instance of the XAudio2 engine by destroying the XAudio2 object and recreating it as necessary (forcing the memory pool to grow via preallocation would have to be reapplied as needed).
It is invalid to call CreateSubmixVoice from within a callback (that is,
The
Creates and configures a mastering voice.
- If successful, returns a reference to the new
Number of channels the mastering voice expects in its input audio. InputChannels must be less than or equal to
You can set InputChannels to
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of XAUDIO2_QUANTUM_DENOMINATOR. InputSampleRate must be between
You can set InputSampleRate to
Windows XP defaults to 44100.
Windows Vista and Windows 7 default to the setting specified in the Sound Control Panel. The default for this setting is 44100 (or 48000 if required by the driver). Flags
Flags that specify the behavior of the mastering voice. Must be 0.
Identifier of the device to receive the output audio. Specifying the default value of
Pointer to an
The audio stream category to use for this mastering voice.
Returns
See XAudio2 Error Codes for descriptions of XAudio2 specific error codes.
Mastering voices receive the output of one or more source or submix voices. They process the data, and send it to the audio output device.
Typically, you should create a mastering voice with an input sample rate that will be used by the majority of the title's audio content. The mastering voice performs a sample rate conversion from this input sample rate to the actual device output rate.
You cannot create any source or submix voices until a mastering voice exists. You cannot destroy a mastering voice if any source or submix voices still exist.
Mastering voices are always processed after all source and submix voices. This means that you need not specify a ProcessingStage parameter to control the processing order.
XAudio2 only allows one mastering voice to exist at once. If you attempt to create more than one voice,
When first created, mastering voices are in the started state.
It is invalid to call CreateMasteringVoice from within a callback (that is,
The
Note that the DirectX SDK XAUDIO2 version of CreateMasteringVoice took a DeviceIndex argument instead of a szDeviceId and a StreamCategory argument. This reflects the changes needed for the standard Windows device enumeration model.
-Starts the audio processing thread.
-Returns
After StartEngine is called, all started voices begin to consume audio. All enabled effects start running, and the resulting audio is sent to any connected output devices. When XAudio2 is first initialized, the engine is already in the started state.
It is invalid to call StartEngine from within a callback (that is,
Stops the audio processing thread.
-When StopEngine is called, all output is stopped immediately. However, the audio graph is left untouched, preserving effect parameters, effect histories (for example, the data stored by a reverb effect in order to emit echoes of a previous sound), voice states, pending source buffers, cursor positions, and so forth. When the engine is restarted, the resulting audio output will be identical, apart from a period of silence, to the output that would have been produced if the engine had never been stopped.
It is invalid to call StopEngine from within a callback (that is,
Atomically applies a set of operations that are tagged with a given identifier.
-Identifier of the set of operations to be applied. To commit all pending operations, pass
Returns
CommitChanges does nothing if no operations are tagged with the given identifier.
See the XAudio2 Operation Sets overview for information about working with CommitChanges and the XAudio2 interface methods that may be deferred. -
-Returns current resource usage details, such as available memory or CPU usage.
-On success, reference to an
For specific information on the statistics returned by GetPerformanceData, see the
Changes global debug logging options for XAudio2.
-Pointer to a
This parameter is reserved and must be
SetDebugConfiguration sets the debug configuration for the given instance of XAudio2 engine. See
Used with
When streaming an xWMA file a few packets at a time,
In addition, when streaming an xWMA file a few packets at a time, the application should subtract pDecodedPacketCumulativeBytes[PacketCount-1] of the previous packet from all the entries of the currently submitted packet.
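A sketch of that rebasing step, assuming the cumulative table is an array of byte counts (the function name is illustrative, not part of the API):

```cpp
#include <cstddef>
#include <cstdint>

// Rebases the cumulative decoded-byte table of the currently submitted
// batch of xWMA packets, as described above: subtract the last cumulative
// value of the previous batch from every entry of the current one.
inline void RebaseCumulativeBytes(uint32_t* table, std::size_t packetCount,
                                  uint32_t previousBatchLast)
{
    for (std::size_t i = 0; i < packetCount; ++i)
        table[i] -= previousBatchLast;
}
```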
The members of
Memory allocated to hold a
XAUDIO 2.8 in Windows 8.x does not support xWMA decoding. Use Windows Media Foundation APIs to perform the decoding from WMA to PCM instead. This functionality is available in the DirectX SDK versions of XAUDIO and in XAUDIO 2.9 in Windows 10.
-Contains the new global debug configuration for XAudio2. Used with the SetDebugConfiguration function.
-Debugging messages can be completely turned off by initializing
Defines an effect chain.
-Number of effects in the effect chain for the voice.
Array of
Defines filter parameters for a source voice.
-Setting
XAUDIO2_FILTER_PARAMETERS FilterParams;
FilterParams.Frequency = 1.0f;
FilterParams.OneOverQ = 1.0f;
FilterParams.Type = LowPassFilter;
The following formulas show the relationship between the members of
yl(n) = F1 yb(n) + yl(n - 1)
yb(n) = F1 yh(n) + yb(n - 1)
yh(n) = x(n) - yl(n) - OneOverQ yb(n - 1)
yn(n) = yl(n) + yh(n)
Where:
yl = lowpass output
yb = bandpass output
yh = highpass output
yn = notch output
F1 = the Frequency member of the filter parameters
OneOverQ = the OneOverQ member of the filter parameters
The
Filter radian frequency calculated as (2 * sin(pi * (desired filter cutoff frequency) / sampleRate)). The frequency must be greater than or equal to 0 and less than or equal to
Reciprocal of Q factor. Controls how quickly frequencies beyond Frequency are dampened. Larger values result in quicker dampening while smaller values cause dampening to occur more gradually. Must be greater than 0 and less than or equal to
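A minimal sketch of the frequency conversion and of one step of the state-variable filter defined by the formulas above (xaudio2.h provides XAudio2CutoffFrequencyToRadians for the conversion; the step function here is an illustrative re-derivation, not library code):

```cpp
#include <cmath>

// Converts a cutoff frequency in Hz to the radian Frequency value the
// filter parameters expect: 2 * sin(pi * cutoff / sampleRate).
inline float CutoffFrequencyToRadians(float cutoffHz, float sampleRate)
{
    return 2.0f * std::sin(3.14159265f * cutoffHz / sampleRate);
}

// Persistent state for the state-variable filter: the previous lowpass
// and bandpass outputs.
struct SvfState { float yl = 0.0f; float yb = 0.0f; };

// One step of the filter per the formulas above; returns the lowpass
// output yl(n). f1 is the radian Frequency, oneOverQ the damping.
inline float StepLowPass(SvfState& st, float x, float f1, float oneOverQ)
{
    float yh = x - st.yl - oneOverQ * st.yb; // highpass
    st.yb += f1 * yh;                        // bandpass
    st.yl += f1 * st.yb;                     // lowpass
    return st.yl;
}
```

Feeding a constant (DC) input, the lowpass output settles to that input value, as expected for a lowpass filter.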
Contains performance information.
-CPU cycles are recorded using QueryPerformanceCounter. Use QueryPerformanceFrequency to convert these values.
-CPU cycles spent on audio processing since the last call to the
Total CPU cycles elapsed since the last call.
Note: This only counts cycles on the CPU on which XAudio2 is running.
Fewest CPU cycles spent on processing any single audio quantum since the last call.
Most CPU cycles spent on processing any single audio quantum since the last call.
Total memory currently in use.
Minimum delay that occurs between the time a sample is read from a source buffer and the time it reaches the speakers.
Windows |
---|
The delay reported is a variable value equal to the rough distance between the last sample submitted to the driver by XAudio2 and the sample currently playing. The following factors can affect the delay: playing multichannel audio on a hardware-accelerated device; the type of audio device (WavePci, WaveCyclic, or WaveRT); and, to a lesser extent, audio hardware implementation. - |
Xbox 360 |
---|
The delay reported is a fixed value, which is normally 1,024 samples (21.333 ms at 48 kHz). If XOverrideSpeakerConfig has been called using the XAUDIOSPEAKERCONFIG_LOW_LATENCY flag, the delay reported is 512 samples (10.667 ms at 48 kHz). - |
Total audio dropouts since the engine started.
Number of source voices currently playing.
Total number of source voices currently in existence.
Number of submix voices currently playing.
Number of resampler xAPOs currently active.
Number of matrix mix xAPOs currently active.
Windows |
---|
Unsupported. |
Xbox 360 |
---|
Number of source voices decoding XMA data. |
Windows |
---|
Unsupported. |
Xbox 360 |
---|
A voice can use more than one XMA stream. |
Contains information about the creation flags, input channels, and sample rate of a voice.
-Note the DirectX SDK versions of XAUDIO2 do not support the ActiveFlags member.
-Flags used to create the voice; see the individual voice interfaces for more information.
Flags that are currently set on the voice.
The number of input channels the voice expects.
The input sample rate the voice expects.
Defines a destination voice that is the target of a send from another voice and specifies whether a filter should be used.
-Indicates whether a filter should be used on data sent to the voice pointed to by pOutputVoice. Flags can be 0 or
A reference to an
Defines a set of voices to receive data from a single output voice.
-If pSends is not
Setting SendCount to 0 is useful for certain effects such as volume meters or file writers that don't generate any audio output to pass on to another voice.
If needed, a voice will perform a single sample rate conversion, from the voice's input sample rate to the input sample rate of the voice's output voices. Because only one sample rate conversion will be performed, all the voice's output voices must have the same input sample rate. If the input sample rates of the voice and its output voices are the same, no sample rate conversion is performed. -
-Number of voices to receive the output of the voice. An OutputCount value of 0 indicates the voice should not send output to any voices.
Array of
Returns the voice's current state and cursor position data.
-For all encoded formats, including constant bit rate (CBR) formats such as adaptive differential pulse code modulation (ADPCM), SamplesPlayed is expressed in terms of decoded samples. For pulse code modulation (PCM) formats, SamplesPlayed is expressed in terms of either input or output samples. There is a one-to-one mapping from input to output for PCM formats.
If a client needs to get the correlated positions of several voices (that is, to know exactly which sample of a particular voice is playing when a specified sample of another voice is playing) it must make the
Pointer to a buffer context provided in the
Number of audio buffers currently queued on the voice, including the one that is processed currently.
Total number of samples processed by this voice since it last started, or since the last audio stream ended (as marked with the
Creates a new XAudio2 object and returns a reference to its
Returns
The DirectX SDK versions of XAUDIO2 supported a flag
Note: No versions of the DirectX SDK contain the xaudio2.lib import library. DirectX SDK versions use COM to create a new XAudio2 object.
-Creates a new reverb audio processing object (APO), and returns a reference to it.
-Contains a reference to the reverb APO that is created.
If this function succeeds, it returns
XAudio2CreateReverb creates an effect performing Princeton Digital Reverb. The XAPO effect library (XAPOFX) includes an alternate reverb effect. Use CreateFX to create this alternate effect.
The reverb APO has the following restrictions:
For information about creating new effects for use with XAudio2, see the XAPO Overview.
Windows |
---|
Because XAudio2CreateReverb calls CoCreateInstance on Windows, the application must have called the CoInitializeEx method before calling XAudio2CreateReverb. A typical calling pattern on Windows would be as follows: #ifndef _XBOX - CoInitializeEx( |
The xaudio2fx.h header defines the AudioReverb class
class __declspec(uuid("C2633B16-471B-4498-B8C5-4F0959E2EC09")) AudioReverb;
-
XAudio2CreateReverb returns this object as a reference to a reference to
The reverb uses the
Note: XAudio2CreateReverb is an inline function in xaudio2fx.h that calls CreateAudioReverb:
XAUDIO2FX_STDAPI CreateAudioReverb(_Outptr_ IUnknown** ppApo);

__inline HRESULT XAudio2CreateReverb(_Outptr_ IUnknown** ppApo, UINT32 /*Flags*/ DEFAULT(0))
{
    return CreateAudioReverb(ppApo);
}
-
- Creates a new volume meter audio processing object (APO) and returns a reference to it.
-Contains the created volume meter APO.
If this function succeeds, it returns
For information on creating new effects for use with XAudio2, see the XAPO Overview.
Windows |
---|
Because XAudio2CreateVolumeMeter calls CoCreateInstance on Windows, the application must have called the CoInitializeEx method before calling XAudio2CreateVolumeMeter. A typical calling pattern on Windows would be as follows: #ifndef _XBOX - CoInitializeEx( |
The xaudio2fx.h header defines the AudioVolumeMeter class
class __declspec(uuid("4FC3B166-972A-40CF-BC37-7DB03DB2FBA3")) AudioVolumeMeter;
-
XAudio2CreateVolumeMeter returns this object as a reference to a reference to
The volume meter uses the
Note: XAudio2CreateVolumeMeter is an inline function in xaudio2fx.h that calls CreateAudioVolumeMeter:
XAUDIO2FX_STDAPI CreateAudioVolumeMeter(_Outptr_ IUnknown** ppApo);

__inline HRESULT XAudio2CreateVolumeMeter(_Outptr_ IUnknown** ppApo, UINT32 /*Flags*/ DEFAULT(0))
{
    return CreateAudioVolumeMeter(ppApo);
}
-
- Specifies directionality for a single-channel non-LFE emitter by scaling DSP behavior with respect to the emitter's orientation.
-For a detailed explanation of sound cones see Sound Cones.
-Inner cone angle in radians. This value must be within 0.0f to X3DAUDIO_2PI.
Outer cone angle in radians. This value must be within InnerAngle to X3DAUDIO_2PI.
Volume scaler on/within inner cone. This value must be within 0.0f to 2.0f.
Volume scaler on/beyond outer cone. This value must be within 0.0f to 2.0f.
LPF direct-path or reverb-path coefficient scaler on/within inner cone. This value is only used for LPF calculations and must be within 0.0f to 1.0f.
LPF direct-path or reverb-path coefficient scaler on or beyond outer cone. This value is only used for LPF calculations and must be within 0.0f to 1.0f.
Reverb send level scaler on or within inner cone. This must be within 0.0f to 2.0f.
Reverb send level scaler on/beyond outer cone. This must be within 0.0f to 2.0f. -
Defines a DSP setting at a given normalized distance.
-Normalized distance. This must be within 0.0f to 1.0f.
DSP control setting.
Defines an explicit piecewise curve made up of linear segments, directly defining DSP behavior with respect to normalized distance.
-
Number of distance curve points. There must be two or more points since all curves must have at least two endpoints defining values at 0.0f and 1.0f normalized distance, respectively.
Receives the results from a call to X3DAudioCalculate.
-The following members must be initialized before passing this structure to the X3DAudioCalculate function:
The following members are returned by passing this structure to the X3DAudioCalculate function:
Defines a single-point or multiple-point 3D audio source that is used with an arbitrary number of sound channels.
-The parameter type
X3DAudio uses a left-handed Cartesian coordinate system, with values on the x-axis increasing from left to right, on the y-axis from bottom to top, and on the z-axis from near to far. Azimuths are measured clockwise from a given reference direction. To use X3DAudio with right-handed coordinates, you must negate the .z element of OrientFront, OrientTop, Position, and Velocity.
For user-defined distance curves, the distance field of the first point must be 0.0f and the distance field of the last point must be 1.0f.
If an emitter moves beyond a distance of (CurveDistanceScaler × 1.0f), the last point on the curve is used to compute the volume output level. The last point is determined by the following: -
-pPoints[PointCount - 1].DSPSetting
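The piecewise-linear evaluation and end-point clamping described above can be sketched as follows (CurvePoint and EvaluateCurve are hypothetical stand-ins for the X3DAUDIO_DISTANCE_CURVE types in x3daudio.h):

```cpp
#include <cstddef>

// Hypothetical mirror of a distance curve point: a normalized distance in
// [0, 1] and the DSP setting that applies at that distance.
struct CurvePoint { float Distance; float DSPSetting; };

// Evaluates a piecewise-linear distance curve at a normalized distance.
// Distances beyond the last point clamp to pPoints[pointCount - 1].DSPSetting,
// matching the rule described above.
inline float EvaluateCurve(const CurvePoint* pPoints, std::size_t pointCount,
                           float normalizedDistance)
{
    if (normalizedDistance <= pPoints[0].Distance)
        return pPoints[0].DSPSetting;
    for (std::size_t i = 1; i < pointCount; ++i) {
        if (normalizedDistance <= pPoints[i].Distance) {
            float span = pPoints[i].Distance - pPoints[i - 1].Distance;
            float t = (normalizedDistance - pPoints[i - 1].Distance) / span;
            return pPoints[i - 1].DSPSetting
                 + t * (pPoints[i].DSPSetting - pPoints[i - 1].DSPSetting);
        }
    }
    return pPoints[pointCount - 1].DSPSetting; // beyond the last point
}
```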
Pointer to a sound cone. Used only with single-channel emitters for matrix, LPF (both direct and reverb paths), and reverb calculations.
Orientation of the front direction. This value must be orthonormal with OrientTop. OrientFront must be normalized when used. For single-channel emitters without cones, OrientFront is used only for emitter angle calculations. For multi-channel emitters, or single-channel emitters with cones, OrientFront is used for matrix, LPF (both direct and reverb paths), and reverb calculations.
Orientation of the top direction. This value must be orthonormal with OrientFront. OrientTop is only used with multi-channel emitters for matrix calculations.
Position in user-defined world units. This value does not affect Velocity.
Velocity vector in user-defined world units/second. This value is used only for doppler calculations. It does not affect Position. -
Value to be used for the inner radius calculations. If InnerRadius is 0, then no inner radius is used, but InnerRadiusAngle may still be used. This value must be between 0.0f and MAX_FLT. -
Value to be used for the inner radius angle calculations. This value must be between 0.0f and X3DAUDIO_PI/4.0.
Number of emitters defined by the
Distance from Position that channels will be placed if ChannelCount is greater than 1. ChannelRadius is only used with multi-channel emitters for matrix calculations. Must be greater than or equal to 0.0f.
Table of channel positions, expressed as an azimuth in radians along the channel radius with respect to the front orientation vector in the plane orthogonal to the top orientation vector. An azimuth of X3DAUDIO_2PI specifies a channel is a low-frequency effects (LFE) channel. LFE channels are positioned at the emitter base and are calculated with respect to pLFECurve only, never pVolumeCurve. pChannelAzimuths must have at least ChannelCount elements, but can be
Volume-level distance curve, which is used only for matrix calculations.
LFE roll-off distance curve, or
Low-pass filter (LPF) direct-path coefficient distance curve, or
LPF reverb-path coefficient distance curve, or
Reverb send level distance curve, or
Curve distance scaler that is used to scale normalized distance curves to user-defined world units, and/or to exaggerate their effect. This does not affect any other calculations. The value must be within the range FLT_MIN to FLT_MAX. CurveDistanceScaler is only used for matrix, LPF (both direct and reverb paths), and reverb calculations.
Doppler shift scaler that is used to exaggerate Doppler shift effect. DopplerScaler is only used for Doppler calculations and does not affect any other calculations. The value must be within the range 0.0f to FLT_MAX.
Defines a point of 3D audio reception.
-X3DAudio uses a left-handed Cartesian coordinate system, with values on the x-axis increasing from left to right, on the y-axis from bottom to top, and on the z-axis from near to far. Azimuths are measured clockwise from a given reference direction. To use X3DAudio with right-handed coordinates, you must negate the .z element of OrientFront, OrientTop, Position, and Velocity.
The parameter type
A listener's front and top vectors must be orthonormal. To be considered orthonormal, a pair of vectors must have a magnitude of 1 ± 1x10^-5 and a dot product of 0 ± 1x10^-5.
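That tolerance check can be written directly; Vec3 and IsOrthonormalPair are illustrative names, not X3DAudio types:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Checks the listener orthonormality requirement stated above: both vectors
// have magnitude 1 within 1e-5, and their dot product is 0 within 1e-5.
inline bool IsOrthonormalPair(const Vec3& front, const Vec3& top)
{
    const float eps = 1e-5f;
    float lenF = std::sqrt(front.x * front.x + front.y * front.y + front.z * front.z);
    float lenT = std::sqrt(top.x * top.x + top.y * top.y + top.z * top.z);
    float dot  = front.x * top.x + front.y * top.y + front.z * top.z;
    return std::fabs(lenF - 1.0f) <= eps
        && std::fabs(lenT - 1.0f) <= eps
        && std::fabs(dot) <= eps;
}
```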
-Orientation of front direction. When pCone is
Orientation of top direction, used only for matrix and delay calculations. This value must be orthonormal with OrientFront when used.
Position in user-defined world units. This value does not affect Velocity.
Velocity vector in user-defined world units per second, used only for doppler calculations. This value does not affect Position.
Pointer to an
Calculates DSP settings with respect to 3D parameters.
-3D audio instance handle. Call
Pointer to an
Pointer to an
Value | Description |
---|---|
X3DAUDIO_CALCULATE_MATRIX | Enables matrix coefficient table calculation. |
X3DAUDIO_CALCULATE_DELAY | Enables delay time array calculation (stereo only). |
X3DAUDIO_CALCULATE_LPF_DIRECT | Enables low pass filter (LPF) direct-path coefficient calculation. |
X3DAUDIO_CALCULATE_LPF_REVERB | Enables LPF reverb-path coefficient calculation. |
X3DAUDIO_CALCULATE_REVERB | Enables reverb send level calculation. |
X3DAUDIO_CALCULATE_DOPPLER | Enables Doppler shift factor calculation. |
X3DAUDIO_CALCULATE_EMITTER_ANGLE | Enables emitter-to-listener interior angle calculation. |
X3DAUDIO_CALCULATE_ZEROCENTER | Fills the center channel with silence. This flag allows you to keep a 6-channel matrix so you do not have to remap the channels, but the center channel will be silent. This flag is only valid if you also set X3DAUDIO_CALCULATE_MATRIX. |
X3DAUDIO_CALCULATE_REDIRECT_TO_LFE | Applies an equal mix of all source channels to a low frequency effect (LFE) destination channel. It only applies to matrix calculations with a source that does not have an LFE channel and a destination that does have an LFE channel. This flag is only valid if you also set X3DAUDIO_CALCULATE_MATRIX. |
?
Pointer to an
You typically call
Important: The listener and emitter values must be valid. Floating-point specials (NaN, QNaN, +INF, -INF) can cause the entire audio output to go silent if introduced into a running audio graph.
-Sets all global 3D audio constants.
-Assignment of channels to speaker positions. This value must not be zero. The only permissible value on Xbox 360 is SPEAKER_XBOX.
Speed of sound, in user-defined world units per second. Use this value only for Doppler calculations. It must be greater than or equal to FLT_MIN.
3D audio instance handle. Use this handle when you call
This function does not return a value.
X3DAUDIO_HANDLE is an opaque data structure. Because the operating system doesn't allocate any additional storage for the 3D audio instance handle, you don't need to free or close it.
-Calculates DSP settings with respect to 3D parameters.
-3D audio instance handle. Call
Pointer to an
Pointer to an
Value | Description |
---|---|
Enables matrix coefficient table calculation. | |
Enables delay time array calculation (stereo only). | |
Enables low pass filter (LPF) direct-path coefficient calculation. | |
Enables LPF reverb-path coefficient calculation. | |
Enables reverb send level calculation. | |
Enables Doppler shift factor calculation. | |
Enables emitter-to-listener interior angle calculation. | |
Fills the center channel with silence. This flag allows you to keep a 6-channel matrix so you do not have to remap the channels, but the center channel will be silent. This flag is only valid if you also set | |
Applies an equal mix of all source channels to a low frequency effect (LFE) destination channel. It only applies to matrix calculations with a source that does not have an LFE channel and a destination that does have an LFE channel. This flag is only valid if you also set |
Pointer to an
You typically call
Important: The listener and emitter values must be valid. Floating-point specials (NaN, QNaN, +INF, -INF) can cause the entire audio output to go silent if introduced into a running audio graph.
-Sets all global 3D audio constants.
-Assignment of channels to speaker positions. This value must not be zero. The only permissible value on Xbox 360 is SPEAKER_XBOX.
Speed of sound, in user-defined world units per second. Use this value only for Doppler calculations. It must be greater than or equal to FLT_MIN.
3D audio instance handle. Use this handle when you call
This function does not return a value.
X3DAUDIO_HANDLE is an opaque data structure. Because the operating system doesn't allocate any additional storage for the 3D audio instance handle, you don't need to free or close it.
-Describes the contents of a stream buffer.
-This metadata can be used to implement optimizations that require knowledge of a stream buffer's contents. For example, XAPOs that always produce silent output from silent input can check the flag on the input stream buffer to determine if any signal processing is necessary. If silent, the XAPO can simply set the flag on the output stream buffer to silent and return, thus averting the work of processing silent data.
Likewise, XAPOs that receive valid input data, but generate silence (for any reason), may set the output stream buffer's flag accordingly, rather than writing silent samples to the buffer.
These flags represent what should be assumed is in the respective buffer. The flags may not reflect what is actually stored in memory. For example, the
Stream buffer contains only silent samples.
Stream buffer contains audio data to be processed.
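The silent-input short circuit described above can be sketched as follows. The flag and struct names here are simplified stand-ins for the XAPO buffer-flag types, not the real declarations:

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-ins for the XAPO buffer flags (silent vs. valid). */
typedef enum { BUFFER_SILENT = 0, BUFFER_VALID = 1 } BufferFlags;

typedef struct {
    float      *pBuffer;
    size_t      frameCount;
    BufferFlags flags;
} StreamBuffer;

/* Sketch of the optimization: if the input is flagged silent, flag the
 * output silent and skip all DSP work without touching the samples. */
static void process_sketch(const StreamBuffer *in, StreamBuffer *out) {
    if (in->flags == BUFFER_SILENT) {
        out->flags = BUFFER_SILENT;   /* no need to write samples */
        return;
    }
    /* Real signal processing would go here; pass-through for illustration. */
    memcpy(out->pBuffer, in->pBuffer, in->frameCount * sizeof(float));
    out->flags = BUFFER_VALID;
}
```

Note that the output buffer's prior contents are irrelevant once it is flagged silent; consumers must trust the flag rather than the memory.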
Initialization parameters for use with the FXECHO XAPOFX.
-Use of this structure is optional. The default MaxDelay is
Parameters for use with the FXECHO XAPOFX.
-Echo only supports FLOAT32 audio formats.
-Parameters for use with the FXEQ XAPO.
-Each band ranges from FrequencyCenterN - (BandwidthN / 2) to FrequencyCenterN + (BandwidthN / 2).
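The band-edge formula above is trivial to compute; this illustrative helper (not part of XAPOFX) makes the arithmetic explicit:

```c
/* Illustrative helper: the frequency range covered by one EQ band,
 * per the formula above (center - bandwidth/2 to center + bandwidth/2). */
typedef struct { float low, high; } BandRange;

static BandRange eq_band_range(float centerHz, float bandwidthHz) {
    BandRange r;
    r.low  = centerHz - bandwidthHz * 0.5f;
    r.high = centerHz + bandwidthHz * 0.5f;
    return r;
}
```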
-Center frequency in Hz for band 0. Must be between
The boost or decrease to frequencies in band 0. Must be between
Width of band 0. Must be between
Center frequency in Hz for band 1. Must be between
The boost or decrease to frequencies in band 1. Must be between
Width of band 1. Must be between
Center frequency in Hz for band 2. Must be between
The boost or decrease to frequencies in band 2. Must be between
Width of band 2. Must be between
Center frequency in Hz for band 3. Must be between
The boost or decrease to frequencies in band 3. Must be between
Width of band 3. Must be between
Parameters for use with the FXMasteringLimiter XAPO.
-Parameters for use with the FXReverb XAPO.
-Controls the character of the individual wall reflections. Set to minimum value to simulate a hard flat surface and to maximum value to simulate a diffuse surface. Value must be between
Size of the room. Value must be between
The interface for an Audio Processing Object which can be used in an XAudio2 effect chain.
-The interface for an Audio Processing Object which can be used in an XAudio2 effect chain.
-Returns the registration properties of an XAPO.
- Receives a reference to a
Returns
Queries if a specific input format is supported for a given output format.
-Output format.
Input format to check for being supported.
If not
Returns
The
Queries if a specific output format is supported for a given input format.
-Input format.
Output format to check for being supported.
If not
Returns
The
Performs any effect-specific initialization.
- Effect-specific initialization parameters, may be
Size of pData in bytes, may be 0 if pData is
Returns
The contents of pData are defined by a given XAPO. Immutable parameters (constant for the lifetime of the XAPO) should be set in this method. Once initialized, an XAPO cannot be initialized again. An XAPO should be initialized before passing it to XAudio2 as part of an effect chain.
Note: XAudio2 does not call this method; it should be called by the client before passing the XAPO to XAudio2. -Resets variables dependent on frame history.
-Constant and locked parameters such as the input and output formats remain unchanged. Variables set by
For example, an effect with delay should zero out its delay line during this method, but should not reallocate anything as the XAPO remains locked with a constant input and output configuration.
XAudio2 only calls this method if the XAPO is locked.
This method is called from the realtime thread and should not block. -
-Called by XAudio2 to lock the input and output configurations of an XAPO allowing it to do any final initialization before Process is called on the realtime thread.
-Returns
Once locked, the input and output configuration and any other locked parameters remain constant until UnlockForProcess is called. After an XAPO is locked, further calls to LockForProcess have no effect until the UnlockForProcess function is called.
An XAPO indicates what specific formats it supports through its implementation of the IsInputFormatSupported and IsOutputFormatSupported methods. An XAPO should assert the input and output configurations are supported and that any required effect-specific initialization is complete. The IsInputFormatSupported, IsOutputFormatSupported, and Initialize methods should be used as necessary before calling this method.
Because Process is a nonblocking method, all internal memory buffers required for Process should be allocated in LockForProcess.
Process is never called before LockForProcess returns successfully.
LockForProcess is called directly by XAudio2 and should not be called by the client code.
-Deallocates variables that were allocated with the LockForProcess method.
-Unlocking an XAPO instance allows it to be reused with different input and output formats.
-Runs the XAPO's digital signal processing (DSP) code on the given input and output buffers.
-Number of elements in pInputProcessParameters.
Note: XAudio2 currently supports only one input stream and one output stream. Input array of
Number of elements in pOutputProcessParameters.
Note: XAudio2 currently supports only one input stream and one output stream. Output array of
TRUE to process normally;
Implementations of this function should not block, as the function is called from the realtime audio processing thread.
All code that could cause a delay, such as format validation and memory allocation, should be put in the
For in-place processing, the pInputProcessParameters parameter will not necessarily be the same as pOutputProcessParameters. Rather, their pBuffer members will point to the same memory.
Multiple input and output buffers may be used with in-place XAPOs, though the input buffer count must equal the output buffer count. For in-place processing when multiple input and output buffers are used, the XAPO may assume the number of input buffers equals the number of output buffers.
In addition to writing to the output buffer, as appropriate, an XAPO is responsible for setting the output stream's buffer flags and valid frame count.
When IsEnabled is
When writing a Process method, it is important to note XAudio2 audio data is interleaved, which means data from each channel is adjacent for a particular sample number. For example, if there was a 4-channel wave playing into an XAudio2 source voice, the audio data would be a sample of channel 0, a sample of channel 1, a sample of channel 2, a sample of channel 3, and then the next sample of channels 0, 1, 2, 3, and so on. -
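The interleaved layout described above means the sample for (frame, channel) sits at a fixed offset; a small helper (illustrative, not part of XAudio2) makes the indexing explicit:

```c
#include <stddef.h>

/* Index of (frame, channel) in an interleaved buffer: samples for all
 * channels of frame n are adjacent, so frame n starts at n * channelCount. */
static size_t interleaved_index(size_t frame, size_t channel,
                                size_t channelCount) {
    return frame * channelCount + channel;
}
```

For the 4-channel example above, channel 3 of frame 0 is at index 3, and channel 0 of frame 1 immediately follows it at index 4.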
-Returns the number of input frames required to generate the given number of output frames.
-The number of output frames desired.
Returns the number of input frames required.
XAudio2 calls this method to determine what size input buffer an XAPO requires to generate the given number of output frames. This method only needs to be called once while an XAPO is locked. CalcInputFrames is only called by XAudio2 if the XAPO is locked.
This function should not block, because it may be called from the realtime audio processing thread.
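As a concrete illustration of the kind of computation a CalcInputFrames implementation performs, consider a hypothetical resampling XAPO (the function below is a sketch, not XAudio2 code): the input frame count is the output count scaled by the rate ratio, rounded up so the output can always be filled.

```c
/* Sketch for a hypothetical resampling XAPO: input frames needed to
 * produce outputFrames, i.e. ceil(outputFrames * inputRate / outputRate),
 * computed in integer arithmetic to avoid rounding surprises. */
static unsigned calc_input_frames(unsigned outputFrames,
                                  unsigned inputRate, unsigned outputRate) {
    unsigned long long n = (unsigned long long)outputFrames * inputRate;
    return (unsigned)((n + outputRate - 1) / outputRate);
}
```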
-Returns the number of output frames that will be generated from a given number of input frames.
-The number of input frames.
Returns the number of output frames that will be produced.
XAudio2 calls this method to determine how large of an output buffer an XAPO will require for a certain number of input frames. CalcOutputFrames is only called by XAudio2 if the XAPO is locked.
This function should not block, because it may be called from the realtime audio processing thread.
-An optional interface that allows an XAPO to use effect-specific parameters.
-An optional interface that allows an XAPO to use effect-specific parameters.
-Sets effect-specific parameters.
-Effect-specific parameter block.
Size of pParameters, in bytes.
The data in pParameters is completely effect-specific and determined by the implementation of the
SetParameters can only be called on the real-time audio processing thread; no synchronization between SetParameters and the
Gets the current values for any effect-specific parameters.
-Receives an effect-specific parameter block.
Size of pParameters, in bytes.
The data in pParameters is completely effect-specific and determined by the implementation of the
Unlike SetParameters, XAudio2 does not call this method on the realtime audio processing thread. Thus, the XAPO must protect variables shared with
XAudio2 calls this method from the
This method may block and should never be called from the realtime audio processing thread; instead, get the current parameters from CXAPOParametersBase::BeginProcess.
-Defines stream buffer parameters that may change from one call to the next. Used with the Process method.
-Although the format and maximum size values of a particular stream buffer are constant, as defined by the
Defines stream buffer parameters that remain constant while an XAPO is locked. Used with the
The byte size of the respective stream buffer must be at least MaxFrameCount × (pFormat->nBlockAlign) bytes.
-Describes general characteristics of an XAPO. Used with
Describes the current state of the Xbox 360 Controller.
-This structure is used by the
The specific mapping of button to game function varies depending on the game type.
The constant XINPUT_GAMEPAD_TRIGGER_THRESHOLD may be used as the value that bLeftTrigger and bRightTrigger must exceed to register as pressed. This is optional, but often desirable. Xbox 360 Controller buttons do not manifest crosstalk. -
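The trigger-threshold filtering suggested above can be sketched as follows (XINPUT_GAMEPAD_TRIGGER_THRESHOLD is defined as 30 in XInput.h):

```c
#include <stdbool.h>

/* XINPUT_GAMEPAD_TRIGGER_THRESHOLD is 30 in XInput.h; treating a
 * trigger as pressed only above it filters resting-state noise. */
#define TRIGGER_THRESHOLD 30

static bool trigger_pressed(unsigned char triggerValue) {
    return triggerValue > TRIGGER_THRESHOLD;
}
```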
-Bitmask of the device digital buttons, as follows. A set bit indicates that the corresponding button is pressed.
Device button | Bitmask |
---|---|
XINPUT_GAMEPAD_DPAD_UP | 0x0001 |
XINPUT_GAMEPAD_DPAD_DOWN | 0x0002 |
XINPUT_GAMEPAD_DPAD_LEFT | 0x0004 |
XINPUT_GAMEPAD_DPAD_RIGHT | 0x0008 |
XINPUT_GAMEPAD_START | 0x0010 |
XINPUT_GAMEPAD_BACK | 0x0020 |
XINPUT_GAMEPAD_LEFT_THUMB | 0x0040 |
XINPUT_GAMEPAD_RIGHT_THUMB | 0x0080 |
XINPUT_GAMEPAD_LEFT_SHOULDER | 0x0100 |
XINPUT_GAMEPAD_RIGHT_SHOULDER | 0x0200 |
XINPUT_GAMEPAD_A | 0x1000 |
XINPUT_GAMEPAD_B | 0x2000 |
XINPUT_GAMEPAD_X | 0x4000 |
XINPUT_GAMEPAD_Y | 0x8000 |
Bits that are set but not defined above are reserved, and their state is undefined.
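Testing a button therefore means masking wButtons against the bit of interest; the sketch below uses the XInput.h bitmask values directly:

```c
#include <stdbool.h>

/* Button bitmask values from XInput.h. */
#define GAMEPAD_A        0x1000
#define GAMEPAD_B        0x2000
#define GAMEPAD_DPAD_UP  0x0001

/* A set bit in wButtons means the corresponding button is pressed. */
static bool button_pressed(unsigned short wButtons, unsigned short mask) {
    return (wButtons & mask) != 0;
}
```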
The current value of the left trigger analog control. The value is between 0 and 255.
The current value of the right trigger analog control. The value is between 0 and 255.
Left thumbstick x-axis value. Each of the thumbstick axis members is a signed value between -32768 and 32767 describing the position of the thumbstick. A value of 0 is centered. Negative values signify down or to the left. Positive values signify up or to the right. The constants
Left thumbstick y-axis value. The value is between -32768 and 32767.
Right thumbstick x-axis value. The value is between -32768 and 32767.
Right thumbstick y-axis value. The value is between -32768 and 32767.
Retrieves the battery type and charge status of a wireless controller.
-Index of the signed-in gamer associated with the device. Can be a value in the range 0 to XUSER_MAX_COUNT - 1.
Specifies which device associated with this user index should be queried. Must be
Contains information on battery type and charge state.
-The type of battery. BatteryType will be one of the following values.
Value | Description |
---|---|
BATTERY_TYPE_DISCONNECTED | The device is not connected. |
BATTERY_TYPE_WIRED | The device is a wired device and does not have a battery. |
BATTERY_TYPE_ALKALINE | The device has an alkaline battery. |
BATTERY_TYPE_NIMH | The device has a nickel metal hydride battery. |
BATTERY_TYPE_UNKNOWN | The device has an unknown battery type. |
The charge state of the battery. This value is only valid for wireless devices with a known battery type. BatteryLevel will be one of the following values.
Value |
---|
BATTERY_LEVEL_EMPTY |
BATTERY_LEVEL_LOW |
BATTERY_LEVEL_MEDIUM |
BATTERY_LEVEL_FULL |
A table of controller subtypes available in XInput.
-Describes the capabilities of a connected controller. The
The SubType member indicates the specific subtype of controller present. Games may detect the controller subtype and tune their handling of controller input or output based on subtypes that are well suited to their game genre. For example, a car racing game might check for the presence of a wheel controller to provide finer control of the car being driven. However, titles must not disable or ignore a device based on its subtype. Subtypes not recognized by the game or for which the game is not specifically tuned should be treated as a standard Xbox 360 Controller (
Older XUSB Windows drivers report incomplete capabilities information, particularly for wireless devices. The latest XUSB Windows driver provides full support for wired and wireless devices, and more complete and accurate capabilities flags.
-Retrieves a gamepad input event.
-Wireless controllers are not considered active upon system startup, and calls to any of the XInput functions before a wireless controller is made active return
[in] Index of the signed-in gamer associated with the device. Can be a value in the range 0 to XUSER_MAX_COUNT - 1, or
[in] Reserved
[out] Pointer to an
Retrieves the current state of the specified controller.
-Index of the user's controller. Can be a value from 0 to 3. For information about how this value is determined and how the value maps to indicators on the controller, see Multiple Controllers.
Pointer to an
If the function succeeds, the return value is
If the controller is not connected, the return value is
If the function fails, the return value is an error code defined in Winerror.h. The function does not use SetLastError to set the calling thread's last-error code.
When
Sends data to a connected controller. This function is used to activate the vibration function of a controller.
-Index of the user's controller. Can be a value from 0 to 3. For information about how this value is determined and how the value maps to indicators on the controller, see Multiple Controllers.
Pointer to an
If the function succeeds, the return value is
If the controller is not connected, the return value is
If the function fails, the return value is an error code defined in WinError.h. The function does not use SetLastError to set the calling thread's last-error code.
Retrieves the capabilities and features of a connected controller.
-Index of the user's controller. Can be a value in the range 0 to 3. For information about how this value is determined and how the value maps to indicators on the controller, see Multiple Controllers.
Input flags that identify the controller type. If this value is 0, then the capabilities of all controllers connected to the system are returned. Currently, only one value is supported:
Value | Description |
---|---|
XINPUT_FLAG_GAMEPAD | Limit query to devices of Xbox 360 Controller type. |
Any value of dwFlags other than the above or 0 is illegal and will result in an error break when debugging.
Pointer to an
If the function succeeds, the return value is
If the controller is not connected, the return value is
If the function fails, the return value is an error code defined in WinError.h. The function does not use SetLastError to set the calling thread's last-error code.
Sets the reporting state of XInput.
-If enable is
This function is meant to be called when an application gains or loses focus (such as via WM_ACTIVATEAPP). Using this function, you will not have to change the XInput query loop in your application as neutral data will always be reported if XInput is disabled. -
In a controller that supports vibration effects:
Retrieves the sound rendering and sound capture audio device IDs that are associated with the headset connected to the specified controller.
-Index of the gamer associated with the device.
Windows Core Audio device ID string for render (speakers).
Size, in wide-chars, of the render device ID string buffer.
Windows Core Audio device ID string for capture (microphone).
Size, in wide-chars, of capture device ID string buffer.
If the function successfully retrieves the device IDs for render and capture, the return code is
If there is no headset connected to the controller, the function will also retrieve
If the controller port device is not physically connected, the function will return
If the function fails, it will return a valid Win32 error code.
Callers must allocate the memory for the buffers passed to
Retrieves the battery type and charge status of a wireless controller.
-Index of the signed-in gamer associated with the device. Can be a value in the range 0 to XUSER_MAX_COUNT - 1.
Specifies which device associated with this user index should be queried. Must be
Pointer to an
If the function succeeds, the return value is
Retrieves a gamepad input event.
-[in] Index of the signed-in gamer associated with the device. Can be a value in the range 0 to XUSER_MAX_COUNT - 1, or
[in] Reserved
[out] Pointer to an
If the function succeeds, the return value is
If no new keys have been pressed, the return value is
If the controller is not connected or the user has not activated it, the return value is
If the function fails, the return value is an error code defined in Winerror.h. The function does not use SetLastError to set the calling thread's last-error code.
Wireless controllers are not considered active upon system startup, and calls to any of the XInput functions before a wireless controller is made active return
Contains information on battery type and charge state.
-The type of battery. BatteryType will be one of the following values.
Value | Description |
---|---|
BATTERY_TYPE_DISCONNECTED | The device is not connected. |
BATTERY_TYPE_WIRED | The device is a wired device and does not have a battery. |
BATTERY_TYPE_ALKALINE | The device has an alkaline battery. |
BATTERY_TYPE_NIMH | The device has a nickel metal hydride battery. |
BATTERY_TYPE_UNKNOWN | The device has an unknown battery type. |
The charge state of the battery. This value is only valid for wireless devices with a known battery type. BatteryLevel will be one of the following values.
Value |
---|
BATTERY_LEVEL_EMPTY |
BATTERY_LEVEL_LOW |
BATTERY_LEVEL_MEDIUM |
BATTERY_LEVEL_FULL |
Describes the capabilities of a connected controller. The
The SubType member indicates the specific subtype of controller present. Games may detect the controller subtype and tune their handling of controller input or output based on subtypes that are well suited to their game genre. For example, a car racing game might check for the presence of a wheel controller to provide finer control of the car being driven. However, titles must not disable or ignore a device based on its subtype. Subtypes not recognized by the game or for which the game is not specifically tuned should be treated as a standard Xbox 360 Controller (
Older XUSB Windows drivers report incomplete capabilities information, particularly for wireless devices. The latest XUSB Windows driver provides full support for wired and wireless devices, and more complete and accurate capabilities flags.
-Specifies keystroke data returned by
Future devices may return HID codes and virtual key values that are not supported on current devices, and are currently undefined. Applications should ignore these unexpected values.
A virtual-key code is a byte value that represents a particular physical key on the keyboard, not the character or characters (possibly none) that the key can be mapped to based on keyboard state. The keyboard state at the time a virtual key is pressed modifies the character reported. For example, VK_4 might represent a "4" or a "$", depending on the state of the SHIFT key.
A reported keyboard event includes the virtual key that caused the event, whether the key was pressed or released (or is repeating), and the state of the keyboard at the time of the event. The keyboard state includes information about whether any CTRL, ALT, or SHIFT keys are down.
If the keyboard event represents a Unicode character (for example, pressing the "A" key), the Unicode member will contain that character. Otherwise, Unicode will contain the value zero.
The valid virtual-key (VK_xxx) codes are defined in XInput.h. In addition to codes that indicate key presses, the following codes indicate controller input.
Value | Description |
---|---|
VK_PAD_A | A button |
VK_PAD_B | B button |
VK_PAD_X | X button |
VK_PAD_Y | Y button |
VK_PAD_RSHOULDER | Right shoulder button |
VK_PAD_LSHOULDER | Left shoulder button |
VK_PAD_LTRIGGER | Left trigger |
VK_PAD_RTRIGGER | Right trigger |
VK_PAD_DPAD_UP | Directional pad up |
VK_PAD_DPAD_DOWN | Directional pad down |
VK_PAD_DPAD_LEFT | Directional pad left |
VK_PAD_DPAD_RIGHT | Directional pad right |
VK_PAD_START | START button |
VK_PAD_BACK | BACK button |
VK_PAD_LTHUMB_PRESS | Left thumbstick click |
VK_PAD_RTHUMB_PRESS | Right thumbstick click |
VK_PAD_LTHUMB_UP | Left thumbstick up |
VK_PAD_LTHUMB_DOWN | Left thumbstick down |
VK_PAD_LTHUMB_RIGHT | Left thumbstick right |
VK_PAD_LTHUMB_LEFT | Left thumbstick left |
VK_PAD_LTHUMB_UPLEFT | Left thumbstick up and left |
VK_PAD_LTHUMB_UPRIGHT | Left thumbstick up and right |
VK_PAD_LTHUMB_DOWNRIGHT | Left thumbstick down and right |
VK_PAD_LTHUMB_DOWNLEFT | Left thumbstick down and left |
VK_PAD_RTHUMB_UP | Right thumbstick up |
VK_PAD_RTHUMB_DOWN | Right thumbstick down |
VK_PAD_RTHUMB_RIGHT | Right thumbstick right |
VK_PAD_RTHUMB_LEFT | Right thumbstick left |
VK_PAD_RTHUMB_UPLEFT | Right thumbstick up and left |
VK_PAD_RTHUMB_UPRIGHT | Right thumbstick up and right |
VK_PAD_RTHUMB_DOWNRIGHT | Right thumbstick down and right |
VK_PAD_RTHUMB_DOWNLEFT | Right thumbstick down and left |
-Represents the state of a controller.
-The dwPacketNumber member is incremented only if the status of the controller has changed since the controller was last polled.
-State packet number. The packet number indicates whether there have been any changes in the state of the controller. If the dwPacketNumber member is the same in sequentially returned
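The packet-number comparison described above drives the usual polling pattern: process input only when dwPacketNumber differs from the value seen on the previous poll. The helper below is a pure-logic sketch of that accounting (it does not call XInputGetState):

```c
#include <stddef.h>

/* Sketch of the polling pattern: given a sequence of dwPacketNumber
 * values from successive polls, count how many polls actually carried
 * new controller state (i.e. the packet number changed). */
static size_t count_state_changes(const unsigned long *packets, size_t n) {
    size_t changes = 0;
    for (size_t i = 1; i < n; ++i)
        if (packets[i] != packets[i - 1])
            ++changes;
    return changes;
}
```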
Specifies motor speed levels for the vibration function of a controller.
-The left motor is the low-frequency rumble motor. The right motor is the high-frequency rumble motor. The two motors are not the same, and they create different vibration effects.
-Speed of the left motor. Valid values are in the range 0 to 65,535. Zero signifies no motor use; 65,535 signifies 100 percent motor use.
Speed of the right motor. Valid values are in the range 0 to 65,535. Zero signifies no motor use; 65,535 signifies 100 percent motor use.
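Games typically express vibration intensity as a normalized float; the illustrative helper below (not part of XInput) maps it onto the 0 to 65,535 range used by wLeftMotorSpeed and wRightMotorSpeed:

```c
/* Maps a normalized 0.0 - 1.0 intensity onto the 0 - 65535 motor-speed
 * range, clamping out-of-range inputs and rounding to nearest. */
static unsigned short motor_speed(float intensity) {
    if (intensity <= 0.0f) return 0;
    if (intensity >= 1.0f) return 65535;
    return (unsigned short)(intensity * 65535.0f + 0.5f);
}
```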
The
-
You can specify
Typically, use
The
-
The
-
The
-
The
-
The methods in this interface present your object's data as a contiguous sequence of bytes that you can read or write. There are also methods for committing and reverting changes on streams that are open in transacted mode and methods for restricting access to a range of bytes in the stream.
Streams can remain open for long periods of time without consuming file-system resources. The IUnknown::Release method is similar to a close function on a file. Once released, the stream object is no longer valid and cannot be used.
Clients of asynchronous monikers can choose between a data-pull or data-push model for driving an asynchronous
- IMoniker::BindToStorage operation and for receiving asynchronous notifications. See
- URL Monikers for more information. The following table compares the behavior of asynchronous
-
The Seek method changes the seek reference to a new location. The new location is relative to either the beginning of the stream, the end of the stream, or the current seek reference.
-The displacement to be added to the location indicated by the dwOrigin parameter. If dwOrigin is STREAM_SEEK_SET, this is interpreted as an unsigned value rather than a signed value.
The origin for the displacement specified in dlibMove. The origin can be the beginning of the file (STREAM_SEEK_SET), the current seek reference (STREAM_SEEK_CUR), or the end of the file (STREAM_SEEK_END). For more information about values, see the STREAM_SEEK enumeration.
A reference to the location where this method writes the value of the new seek reference from the beginning of the stream.
You can set this reference to
You can also use this method to obtain the current value of the seek reference by calling this method with the dwOrigin parameter set to STREAM_SEEK_CUR and the dlibMove parameter set to 0 so that the seek reference is not changed. The current seek reference is returned in the plibNewPosition parameter.
-The SetSize method changes the size of the stream object.
-Specifies the new size, in bytes, of the stream.
This method can return one of these values.
The size of the stream object was successfully changed.
Asynchronous Storage only: Part or all of the stream's data is currently unavailable. For more information, see IFillLockBytes and Asynchronous Storage.
The stream size is not changed because there is no space left on the storage device.
The value of the libNewSize parameter is not supported by the implementation. Not all streams support greater than 2^32 bytes. If a stream does not support more than 2^32 bytes, the high DWORD data type of libNewSize must be zero. If it is nonzero, the implementation may return STG_E_INVALIDFUNCTION. In general, COM-based implementations of the
The object has been invalidated by a revert operation above it in the transaction tree.
If the libNewSize parameter is smaller than the current stream, the stream is truncated to the indicated size.
The seek reference is not affected by the change in stream size.
Calling
The CopyTo method copies a specified number of bytes from the current seek reference in the stream to the current seek reference in another stream.
-A reference to the destination stream. The stream pointed to by pstm can be a new stream or a clone of the source stream.
The number of bytes to copy from the source stream.
A reference to the location where this method writes the actual number of bytes written to the destination. You can set this reference to
A reference to the location where this method writes the actual number of bytes read from the source. You can set this reference to
The CopyTo method copies the specified bytes from one stream to another. It can also be used to copy a stream to itself. The seek reference in each stream instance is adjusted for the number of bytes read or written. This method is equivalent to reading cb bytes into memory using
-
The destination stream can be a clone of the source stream created by calling the
-
If
If
To copy the remainder of the source from the current seek reference, specify the maximum large integer value for the cb parameter. If the seek reference is the beginning of the stream, this operation copies the entire stream.
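The byte accounting described above (actual bytes copied limited by what remains in the source after the current seek position) can be sketched as a pure computation; this helper is illustrative, not part of IStream:

```c
/* Bytes CopyTo can actually transfer: the requested count cb, clamped
 * to what remains in the source after the current seek position. */
static unsigned long long copyto_actual(unsigned long long sourceSize,
                                        unsigned long long seekPos,
                                        unsigned long long cb) {
    unsigned long long remaining =
        (seekPos < sourceSize) ? sourceSize - seekPos : 0;
    return (cb < remaining) ? cb : remaining;
}
```

Passing the maximum large-integer value as cb therefore yields exactly the remainder of the source, which is why that idiom copies the whole stream when the seek position is at the beginning.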
-The Commit method ensures that any changes made to a stream object open in transacted mode are reflected in the parent storage. If the stream object is open in direct mode,
Controls how the changes for the stream object are committed. See the
This method can return one of these values.
Changes to the stream object were successfully committed to the parent level.
Asynchronous Storage only: Part or all of the stream's data is currently unavailable. For more information see IFillLockBytes and Asynchronous Storage.
The commit operation failed due to lack of space on the storage device.
The object has been invalidated by a revert operation above it in the transaction tree.
The Commit method ensures that changes to a stream object opened in transacted mode are reflected in the parent storage. Changes that have been made to the stream since it was opened or last committed are reflected to the parent storage object. If the parent is opened in transacted mode, the parent may revert at a later time, rolling back the changes to this stream object. The compound file implementation does not support the opening of streams in transacted mode, so this method has very little effect other than to flush memory buffers. For more information, see
-
If the stream is open in direct mode, this method ensures that any memory buffers have been flushed out to the underlying storage object. This is much like a flush in traditional file systems.
The
The Revert method discards all changes that have been made to a transacted stream since the last
-
This method can return one of these values.
The stream was successfully reverted to its previous version.
Asynchronous Storage only: Part or all of the stream's data is currently unavailable. For more information see IFillLockBytes and Asynchronous Storage.
The Revert method discards changes made to a transacted stream since the last commit operation.
- The Stat method retrieves the
-
The Clone method creates a new stream object with its own seek reference that references the same bytes as the original stream.
-When successful, reference to the location of an
The Clone method creates a new stream object for accessing the same bytes but using a separate seek reference. The new stream object sees the same data as the source-stream object. Changes written to one object are immediately visible in the other. Range locking is shared between the stream objects.
The initial setting of the seek reference in the cloned stream instance is the same as the current setting of the seek reference in the original stream at the time of the clone operation.
- The
-
Reads a specified number of bytes from the stream object into memory starting at the current read/write location within the stream.
-[in] Points to the buffer into which the stream is read. If an error occurs, this value is
[in] Specifies the number of bytes of data to attempt to read from the stream object.
[out] Pointer to a location where this method writes the actual number of bytes read from the stream object. You can set this reference to
Writes a specified number of bytes into the stream object starting at the current read/write location within the stream.
-[in] Points to the buffer containing the data to be written to the stream.
[in] The number of bytes of data to attempt to write into the stream.
[out] Pointer to a location where this method writes the actual number of bytes written to the stream object. The caller can set this reference to
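The Read contract described above (attempt to read `cb` bytes at the current seek pointer, report the actual count, which may be less near the end of the stream, and advance the pointer) can be sketched in a few lines. `ToySeqStream` is an illustrative stand-in, not the real COM interface.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Toy model of the Read contract: read up to `cb` bytes starting at the
// current seek pointer, report the actual count via `pcbRead`, and advance
// the pointer by that count.
struct ToySeqStream {
    std::vector<unsigned char> data;
    std::size_t pos = 0;

    void Read(void* pv, std::size_t cb, std::size_t* pcbRead) {
        std::size_t n = std::min(cb, data.size() - pos);  // clamp at end of stream
        std::memcpy(pv, data.data() + pos, n);
        pos += n;
        if (pcbRead) *pcbRead = n;  // callers may pass nullptr if uninterested
    }
};

inline bool read_reports_actual_count() {
    ToySeqStream s;
    s.data = {10, 20, 30};
    unsigned char buf[8] = {};
    std::size_t got = 0;
    s.Read(buf, 8, &got);  // asks for 8 bytes, but only 3 are available
    return got == 3 && buf[2] == 30 && s.pos == 3;
}
```

Note that asking for more bytes than remain is not an error; the out-parameter tells the caller how many were actually transferred.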
The
-
The
-
The methods in this interface present your object's data as a contiguous sequence of bytes that you can read or write. There are also methods for committing and reverting changes on streams that are open in transacted mode and methods for restricting access to a range of bytes in the stream.
Streams can remain open for long periods of time without consuming file-system resources. The IUnknown::Release method is similar to a close function on a file. Once released, the stream object is no longer valid and cannot be used.
Clients of asynchronous monikers can choose between a data-pull or data-push model for driving an asynchronous
- IMoniker::BindToStorage operation and for receiving asynchronous notifications. See
- URL Monikers for more information. The following table compares the behavior of asynchronous
-
The
-
The
-
This interface is used to return arbitrary length data.
-An
The ID3DBlob interface is type defined in the D3DCommon.h header file as a
Blobs can be used as a data buffer, storing vertex, adjacency, and material information during mesh optimization and loading operations. Also, these objects are used to return object code and error messages in APIs that compile vertex, geometry and pixel shaders.
-Get a reference to the data.
-Get the size.
-Get a reference to the data.
-Returns a reference.
Get the size.
-The size of the data, in bytes.
Defines a shader macro.
-You can use shader macros in your shaders. The
Shader_Macros[] = { "zero", "0", null, null };
The following shader or effect creation functions take an array of shader macros as an input parameter:
The macro name.
The macro definition.
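The macro array from the remarks can be sketched in C++ against the struct layout from D3DCommon.h. The struct is redeclared here only so the sketch is self-contained; real code should include the SDK header instead. The array must end with a null sentinel entry so the compile functions know where it stops.

```cpp
#include <cassert>
#include <cstddef>

// Layout of D3D_SHADER_MACRO as declared in D3DCommon.h (redeclared here
// so this sketch compiles on its own; include the SDK header in real code).
struct D3D_SHADER_MACRO {
    const char* Name;        // the macro name
    const char* Definition;  // the macro definition
};

// Defines `zero` as `0`; the {nullptr, nullptr} entry terminates the array.
const D3D_SHADER_MACRO Shader_Macros[] = {
    { "zero", "0" },
    { nullptr, nullptr },
};

inline std::size_t macro_count() {
    std::size_t n = 0;
    while (Shader_Macros[n].Name != nullptr) ++n;  // walk to the sentinel
    return n;
}
```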
Driver type options.
-The driver type is required when calling
The driver type is unknown.
A hardware driver, which implements Direct3D features in hardware. This is the primary driver that you should use in your Direct3D applications because it provides the best performance. A hardware driver uses hardware acceleration (on supported hardware) but can also use software for parts of the pipeline that are not supported in hardware. This driver type is often referred to as a hardware abstraction layer or HAL.
A reference driver, which is a software implementation that supports every Direct3D feature. A reference driver is designed for accuracy rather than speed and as a result is slow but accurate. The rasterizer portion of the driver does make use of special CPU instructions whenever it can, but it is not intended for retail applications; use it only for feature testing, demonstration of functionality, debugging, or verifying bugs in other drivers. The reference device for this driver is installed by the Windows SDK 8.0 or later and is intended only as a debug aid for development purposes. This driver may be referred to as a REF driver, a reference driver, or a reference rasterizer.
Note: When you use the REF driver in Windows Store apps, the REF driver renders correctly but doesn't display any output on the screen. To verify bugs in hardware drivers for Windows Store apps, use the WARP driver.
A software driver, which is a driver implemented completely in software. The software implementation is not intended for a high-performance application due to its very slow performance.
A WARP driver, which is a high-performance software rasterizer. The rasterizer supports feature levels 9_1 through level 10_1 with a high performance software implementation. For information about limitations creating a WARP device on certain feature levels, see Limitations Creating WARP and Reference Devices. For more information about using a WARP driver, see Windows Advanced Rasterization Platform (WARP) In-Depth Guide.
Note: The WARP driver included with Windows 8 supports feature levels 9_1 through 11_1.
Note: The WARP driver included with Windows 8.1 fully supports feature level 11_1, including tiled resources.
Describes the set of features targeted by a Direct3D device.
-For an overview of the capabilities of each feature level, see Overview For Each Feature Level.
For information about limitations creating non-hardware-type devices on certain feature levels, see Limitations Creating WARP and Reference Devices.
-Targets features supported by feature level 9.1 including shader model 2.
Targets features supported by feature level 9.2 including shader model 2.
Targets features supported by feature level 9.3 including shader model 2.0b.
Targets features supported by Direct3D 10.0 including shader model 4.
Targets features supported by Direct3D 10.1 including shader model 4.
Targets features supported by Direct3D 11.0 including shader model 5.
Targets features supported by Direct3D 11.1 including shader model 5 and logical blend operations. This feature level requires a display driver that is at least implemented to WDDM for Windows 8 (WDDM 1.2).
Targets features supported by Direct3D 12.0 including shader model 5.
Targets features supported by Direct3D 12.1 including shader model 5.
Specifies interpolation mode, which affects how values are calculated during rasterization.
-The interpolation mode is undefined.
Don't interpolate between register values.
Interpolate linearly between register values.
Interpolate linearly between register values but centroid clamped when multisampling.
Interpolate linearly between register values but with no perspective correction.
Interpolate linearly between register values but with no perspective correction and centroid clamped when multisampling.
Interpolate linearly between register values but sample clamped when multisampling.
Interpolate linearly between register values but with no perspective correction and sample clamped when multisampling.
Values that indicate the minimum desired interpolation precision.
-For more info, see Scalar Types and Using HLSL minimum precision.
-Default minimum precision, which is 32-bit precision.
Minimum precision is min16float, which is 16-bit floating point.
Minimum precision is min10float, which is 10-bit floating point.
Reserved
Minimum precision is min16int, which is 16-bit signed integer.
Minimum precision is min16uint, which is 16-bit unsigned integer.
Minimum precision is any 16-bit value.
Minimum precision is any 10-bit value.
Values that indicate how the pipeline interprets vertex data that is bound to the input-assembler stage. These primitive topology values determine how the vertex data is rendered on screen.
-Use the
The following diagram shows the various primitive types for a geometry shader object.
-Values that identify the type of resource to be viewed as a shader resource.
-A
The type is unknown.
The resource is a buffer.
The resource is a 1D texture.
The resource is an array of 1D textures.
The resource is a 2D texture.
The resource is an array of 2D textures.
The resource is a multisampling 2D texture.
The resource is an array of multisampling 2D textures.
The resource is a 3D texture.
The resource is a cube texture.
The resource is an array of cube textures.
The resource is a raw buffer. For more info about raw viewing of buffers, see Raw Views of Buffers.
A multithread interface accesses multithread settings and can only be used if the thread-safe layer is turned on.
-This interface is obtained by querying it from the ID3D10Device Interface using IUnknown::QueryInterface.
-Enter a device's critical section.
-Entering a device's critical section prevents other threads from simultaneously calling that device's methods (if multithread protection is set to true), calling DXGI methods, and calling the methods of all resource, view, shader, state, and asynchronous interfaces.
This function should be used in multithreaded applications when there is a series of graphics commands that must happen in order. This function is typically called at the beginning of the series of graphics commands, and
Leave a device's critical section.
-This function is typically used in multithreaded applications when there is a series of graphics commands that must happen in order.
Turn multithreading on or off.
-True to turn multithreading on, false to turn it off.
True if multithreading was turned on prior to calling this method, false otherwise.
Find out if multithreading is turned on or not.
-Whether or not multithreading is turned on. True means on, false means off.
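Because every Enter must be matched by a Leave even when an exception unwinds the stack, the pattern above is a natural fit for an RAII guard. `Multithread` below is a hypothetical stand-in with the same Enter/Leave shape as the real interface; in real code the guard would hold the COM interface pointer instead.

```cpp
#include <cassert>
#include <mutex>

// Stand-in modeling the device's critical section (hypothetical; the real
// interface is obtained via QueryInterface and exposes Enter/Leave).
struct Multithread {
    std::recursive_mutex cs;
    int depth = 0;
    void Enter() { cs.lock(); ++depth; }
    void Leave() { --depth; cs.unlock(); }
};

// RAII guard: Enter on construction, Leave on destruction, so the critical
// section is released even if the guarded code throws.
class DeviceLock {
    Multithread& mt_;
public:
    explicit DeviceLock(Multithread& mt) : mt_(mt) { mt_.Enter(); }
    ~DeviceLock() { mt_.Leave(); }
    DeviceLock(const DeviceLock&) = delete;
    DeviceLock& operator=(const DeviceLock&) = delete;
};

inline bool lock_is_balanced() {
    Multithread mt;
    {
        DeviceLock lock(mt);
        // ...series of ordered graphics commands goes here...
    }
    return mt.depth == 0;  // Enter and Leave were paired
}
```

A recursive mutex is used in the sketch because a thread that already owns a device's critical section may legitimately re-enter it.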
The
The IAudioClient::Initialize and IAudioClient::IsFormatSupported methods use the constants defined in the
In shared mode, the client can share the audio endpoint device with clients that run in other user-mode processes. The audio engine always supports formats for client streams that match the engine's mix format. In addition, the audio engine might support another format if the Windows audio service can insert system effects into the client stream to convert the client format to the mix format.
In exclusive mode, the Windows audio service attempts to establish a connection in which the client has exclusive access to the audio endpoint device. In this mode, the audio engine inserts no system effects into the local stream to aid in the creation of the connection point. Either the audio device can handle the specified format directly or the method fails.
For more information about shared-mode and exclusive-mode streams, see User-Mode Audio Components.
-The audio stream will run in shared mode. For more information, see Remarks.
The audio stream will run in exclusive mode. For more information, see Remarks.
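The format-support rules in the remarks reduce to a simple decision: in shared mode the engine's mix format is always accepted (and other formats may be accepted if system effects can convert them), while in exclusive mode the device must handle the requested format directly or the call fails. A toy sketch, with all names and the integer format ids being illustrative assumptions:

```cpp
#include <cassert>

enum ShareMode { SharedMode, ExclusiveMode };

// Toy model of an endpoint's format-support check. Format ids are plain
// ints here purely for illustration.
struct Endpoint {
    int mixFormat;           // the engine's mix format
    int deviceNativeFormat;  // format the hardware handles directly

    // Stand-in for "the Windows audio service can insert system effects
    // to convert the client format to the mix format".
    static bool CanConvertToMix(int /*fmt*/) { return true; }

    bool IsFormatSupported(ShareMode mode, int fmt) const {
        if (mode == SharedMode)
            return fmt == mixFormat || CanConvertToMix(fmt);
        return fmt == deviceNativeFormat;  // exclusive: direct or fail
    }
};

inline bool exclusive_requires_native() {
    Endpoint e{1, 2};  // mix format 1, device-native format 2
    return e.IsFormatSupported(SharedMode, 3)        // convertible in shared mode
        && e.IsFormatSupported(ExclusiveMode, 2)     // native format accepted
        && !e.IsFormatSupported(ExclusiveMode, 1);   // non-native rejected
}
```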
The AudioSessionState enumeration defines constants that indicate the current state of an audio session.
-When a client opens a session by assigning the first stream to the session (by calling the IAudioClient::Initialize method), the initial session state is inactive. The session state changes from inactive to active when a stream in the session begins running (because the client has called the IAudioClient::Start method). The session changes from active to inactive when the client stops the last running stream in the session (by calling the IAudioClient::Stop method). The session state changes to expired when the client destroys the last stream in the session by releasing all references to the stream object.
The system volume-control program, Sndvol, displays volume controls for both active and inactive sessions. Sndvol stops displaying the volume control for a session when the session state changes to expired. For more information about Sndvol, see Audio Sessions.
The IAudioSessionControl::GetState and IAudioSessionEvents::OnStateChanged methods use the constants defined in the AudioSessionState enumeration.
For more information about session states, see Audio Sessions.
-The audio session is inactive. (It contains at least one stream, but none of the streams in the session is currently running.)
The audio session is active. (At least one of the streams in the session is running.)
The audio session has expired. (It contains no streams.)
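The lifecycle in the remarks (inactive when the first stream is assigned, active while any stream runs, inactive again when the last running stream stops, expired when the last stream is released) is a small state machine. The enum values mirror the AudioSessionState constants; the `Session` struct and its counters are illustrative, not the real API.

```cpp
#include <cassert>

enum AudioSessionState {
    AudioSessionStateInactive,
    AudioSessionStateActive,
    AudioSessionStateExpired,
};

// Toy model of the session-state transitions described in the remarks.
struct Session {
    int streams = 0;  // streams assigned to the session
    int running = 0;  // streams currently started
    AudioSessionState state = AudioSessionStateExpired;  // no streams yet

    void AddStream()     { ++streams; state = AudioSessionStateInactive; }
    void Start()         { ++running; state = AudioSessionStateActive; }
    void Stop()          { if (--running == 0) state = AudioSessionStateInactive; }
    void ReleaseStream() { if (--streams == 0) state = AudioSessionStateExpired; }
};

inline bool lifecycle_matches_remarks() {
    Session s;
    s.AddStream();                                        // IAudioClient::Initialize
    bool ok = (s.state == AudioSessionStateInactive);
    s.Start();                                            // IAudioClient::Start
    ok = ok && s.state == AudioSessionStateActive;
    s.Stop();                                             // IAudioClient::Stop
    ok = ok && s.state == AudioSessionStateInactive;
    s.ReleaseStream();                                    // last reference released
    return ok && s.state == AudioSessionStateExpired;
}
```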
Specifies the category of an audio stream.
-Note that only a subset of the audio stream categories are valid for certain stream types.
| Stream type | Valid categories |
|---|---|
| Render stream | All categories are valid. |
| Capture stream | AudioCategory_Communications, AudioCategory_Speech, AudioCategory_Other |
| Loopback stream | AudioCategory_Other |
Games should categorize their music streams as AudioCategory_GameMedia so that game music mutes automatically if another application plays music in the background. Music or video applications should categorize their streams as AudioCategory_Media or AudioCategory_Movie so they will take priority over AudioCategory_GameMedia streams.
The values AudioCategory_ForegroundOnlyMedia and AudioCategory_BackgroundCapableMedia are deprecated. For Windows Store apps, these values will continue to function the same when running on Windows 10 as they did on Windows 8.1. Attempting to use these values in a Universal Windows Platform (UWP) app will result in compilation errors and an exception at runtime. Using these values in a Windows desktop application built with the Windows 10 SDK will result in a compilation error.
-Other audio stream.
Media that will only stream when the app is in the foreground. This enumeration value has been deprecated. For more information, see the Remarks section.
Real-time communications, such as VOIP or chat.
Alert sounds.
Sound effects.
Game sound effects.
Background audio for games.
Game chat audio. Similar to AudioCategory_Communications except that AudioCategory_GameChat will not attenuate other streams.
Speech.
Stream that includes audio with dialog.
Stream that includes audio without dialog.