Moved example source files into sub folder

This commit is contained in:
saschawillems 2017-11-12 19:32:09 +01:00
parent a17e3924b3
commit 94a076e1ae
69 changed files with 685 additions and 164 deletions


@ -144,66 +144,4 @@ ENDIF(WIN32)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/bin/")
add_subdirectory(base)
set(EXAMPLES
bloom
computecloth
computecullandlod
computeheadless
computenbody
computeparticles
computeshader
debugmarker
deferred
deferredmultisampling
deferredshadows
displacement
distancefieldfonts
dynamicuniformbuffer
gears
geometryshader
hdr
imgui
indirectdraw
instancing
mesh
multisampling
multithreading
occlusionquery
offscreen
parallaxmapping
particlefire
pbrbasic
pbribl
pbrtexture
pipelines
pipelinestatistics
pushconstants
radialblur
raytracing
renderheadless
scenerendering
screenshot
shadowmapping
shadowmappingomni
skeletalanimation
specializationconstants
sphericalenvmapping
ssao
stencilbuffer
subpasses
terraintessellation
tessellation
textoverlay
texture
texture3d
texturearray
texturecubemap
texturemipmapgen
texturesparseresidency
triangle
viewportarray
vulkanscene
)
buildExamples()
add_subdirectory(examples)


@ -63,82 +63,82 @@ Building for *iOS* and *macOS* is done using the [examples](xcode/examples.xcode
## Basics
### [Triangle](triangle/)
### [Triangle](examples/triangle/)
<img src="./screenshots/basic_triangle.png" height="72px" align="right">
Most basic example. Renders a colored triangle using an indexed vertex buffer. Vertex and index data are uploaded to device local memory using so-called "staging buffers". Uses a single pipeline with basic shaders loaded from SPIR-V and a single uniform block for passing matrices that is updated whenever the view changes.
This example is far more explicit than the other examples and is meant to be a starting point for learning Vulkan from the ground up. Much of the code is boilerplate that you'd usually encapsulate in helper functions and classes (which is what the other examples do).
### [Pipelines](pipelines/)
### [Pipelines](examples/pipelines/)
<img src="./screenshots/basic_pipelines.png" height="72px" align="right">
[Pipeline state objects](https://www.khronos.org/registry/vulkan/specs/1.0/xhtml/vkspec.html#pipelines) replace the biggest part of the dynamic state machine from OpenGL, baking state information for culling, blending, rasterization, etc. and shaders into a fixed state that can be optimized much more easily by the implementation.
This example uses three different PSOs for rendering the same scene with different visuals and shaders and also demonstrates the use of [pipeline derivatives](https://www.khronos.org/registry/vulkan/specs/1.0/xhtml/vkspec.html#pipelines-pipeline-derivatives).
### [Texture mapping](texture/)
### [Texture mapping](examples/texture/)
<img src="./screenshots/basic_texture.png" height="72px" align="right">
Shows how to upload a 2D texture into video memory for sampling in a shader. Loads a compressed texture into a host visible staging buffer and copies all mip levels to a device local optimal tiled image for best performance.
Also demonstrates the use of combined image samplers. Samplers are detached from the actual texture image and only contain information on how an image is sampled in the shader.
### [Cube maps](texturecubemap/)
### [Cube maps](examples/texturecubemap/)
<img src="./screenshots/texture_cubemap.jpg" height="72px" align="right">
Building on the basic texture loading example, a cubemap texture is loaded into a staging buffer and is copied over to a device local optimal tiled image using buffer-to-image copies for all of its faces and mip levels.
The demo then uses two different pipelines (and shader sets) to display the cubemap as a skybox (background) and as a source for reflections.
### [Texture arrays](texturearray/)
### [Texture arrays](examples/texturearray/)
<img src="./screenshots/texture_array.png" height="72px" align="right">
Texture arrays allow storing multiple images in different layers without any interpolation between the layers.
This example demonstrates the use of a 2D texture array with instanced rendering. Each instance samples from a different layer of the texture array.
### [Mesh rendering](mesh/)
### [Mesh rendering](examples/mesh/)
<img src="./screenshots/basic_mesh.png" height="72px" align="right">
Uses [assimp](https://github.com/assimp/assimp) to load a mesh from a common 3D format including a color map. The mesh data is then converted to a fixed vertex layout matching the shader vertex attribute bindings.
### [Dynamic uniform buffers](dynamicuniformbuffer/) :speech_balloon:
### [Dynamic uniform buffers](examples/dynamicuniformbuffer/) :speech_balloon:
<img src="./screenshots/dynamicuniformbuffer.jpg" height="72px" align="right">
Demonstrates the use of dynamic uniform buffers for rendering multiple objects with different matrices from one big uniform buffer object. Sets up one large uniform buffer containing multiple model matrices that are dynamically addressed at descriptor binding time.
This minimizes the number of descriptor sets required and may help in optimizing memory writes by e.g. only doing partial updates to that memory.
### [Push constants](pushconstants/)
### [Push constants](examples/pushconstants/)
<img src="./screenshots/push_constants.png" height="72px" align="right">
Demonstrates the use of push constants for updating small blocks of shader data at command buffer recording time, without having to use a uniform buffer. Displays several light sources whose positions are updated through a push constant block in a separate command buffer.
### [Specialization constants](specializationconstants/)
### [Specialization constants](examples/specializationconstants/)
<img src="./screenshots/specialization_constants.jpg" height="72px" align="right">
Demonstrates the use of SPIR-V specialization constants used to specify shader constants at pipeline creation time. The example uses one "uber" shader with different lighting paths (phong, toon, texture mapped) from which all pipelines are built, with a specialization constant used to select the shader path to be used for that pipeline at creation time.
### [Offscreen rendering](offscreen/)
### [Offscreen rendering](examples/offscreen/)
<img src="./screenshots/basic_offscreen.jpg" height="72px" align="right">
Shows how to do basic offscreen rendering. Uses a separate framebuffer with color and depth attachments (that is not part of the swap chain) to render the mirrored scene off screen in the first pass.
The second pass then samples from the color attachment of that framebuffer for rendering a mirror surface.
### [Fullscreen radial blur](radialblur/)
### [Fullscreen radial blur](examples/radialblur/)
<img src="./screenshots/radial_blur.png" height="72px" align="right">
Demonstrates the basics of a fullscreen (fragment) shader effect. The scene is rendered into a low resolution offscreen framebuffer first and blended on top of the scene in a second pass. The fragment shader also applies a radial blur to it.
### [Text rendering](textoverlay/)
### [Text rendering](examples/textoverlay/)
<img src="./screenshots/textoverlay.png" height="72px" align="right">
Renders a 2D text overlay on top of an existing 3D scene. The example implements a text overlay class with separate descriptor sets, layouts, pipelines and render pass to detach it from the rendering of the scene. The font is generated by loading glyph data from a [stb font file](http://nothings.org/stb/font/) into a buffer that's copied to the font image.
After rendering the scene, the second render pass of the text overlay class loads the contents of the first render pass and displays text on top of it using blending.
### [CPU particles](particlefire/)
### [CPU particles](examples/particlefire/)
<img src="./screenshots/particlefire.jpg" height="72px" align="right">
CPU based point sprite particle system simulating a fire. Particles and their attributes are stored in a host visible vertex buffer that's updated on the CPU each frame, demonstrating per-frame vertex buffer updates.
@ -147,74 +147,74 @@ Also makes use of pre-multiplied alpha for rendering particles with different bl
## Advanced
### [Multi threaded command buffer generation](multithreading/)
### [Multi threaded command buffer generation](examples/multithreading/)
<img src="./screenshots/multithreading.jpg" height="72px" align="right">
This example demonstrates multi threaded command buffer generation. All available hardware threads are used to generate secondary command buffers concurrently, with each thread also checking object visibility against the current viewing frustum. Command buffers are rebuilt on each frame.
Once all threads have finished (and all secondary command buffers have been constructed), the secondary command buffers are executed inside the primary command buffer and submitted to the queue.
### [Scene rendering](scenerendering/)
### [Scene rendering](examples/scenerendering/)
<img src="./screenshots/scenerendering.jpg" height="72px" align="right">
This example demonstrates a way to render a complex scene consisting of multiple meshes with different materials and textures. It makes use of separate per-material descriptor sets for passing texturing information and uses push constants to pass material properties to the shaders upon pipeline creation.
Also shows how to use multiple descriptor sets simultaneously with the new GLSL "set" layout qualifier introduced with [GL_KHR_vulkan_glsl](https://www.khronos.org/registry/vulkan/specs/misc/GL_KHR_vulkan_glsl.txt).
### [Instancing](instancing/)
### [Instancing](examples/instancing/)
<img src="./screenshots/instancing.jpg" height="72px" align="right">
Uses instancing for rendering multiple instances of the same mesh using different attributes. A secondary vertex buffer containing instanced data (in device local memory) is used to pass instanced data to the shader via vertex attributes, including a texture layer index for using different textures per-instance. Also shows how to mix instanced and non-instanced object rendering.
<br><br>
### [Indirect drawing](indirectdraw/) :speech_balloon:
### [Indirect drawing](examples/indirectdraw/) :speech_balloon:
<img src="./screenshots/indirectdraw.jpg" height="72px" align="right">
This example renders thousands of instanced objects with different geometries using a single indirect draw call (if ```multiDrawIndirect``` is supported). Unlike direct drawing functions, indirect drawing functions take their draw commands from a buffer object containing information like index count, index offset and number of instances to draw.
Shows how to generate and render such an indirect draw command buffer that is staged to the device. Indirect draw buffers are the base for generating and updating draw commands on the GPU using shaders.
### [High dynamic range](hdr/)
### [High dynamic range](examples/hdr/)
<img src="./screenshots/hdr.jpg" height="72px" align="right">
Demonstrates high dynamic range rendering using floating point texture and framebuffer formats, extending the internal image precision range from the usual 8 bits used in LDR to 16/32 bits. Also adds HDR bloom on top of the scene using a separable blur filter and manual exposure via tone mapping.
### [Occlusion queries](occlusionquery/)
### [Occlusion queries](examples/occlusionquery/)
<img src="./screenshots/occlusion_queries.png" height="72px" align="right">
Shows how to use occlusion queries to determine object visibility depending on the number of passed samples for a given object. Does an occlusion pass first, drawing all objects (and the occluder) with basic shaders, then reads the query results to conditionally color the objects during the final pass depending on their visibility.
### [Run-time mip-map generation](texturemipmapgen/) :speech_balloon:
### [Run-time mip-map generation](examples/texturemipmapgen/) :speech_balloon:
<img src="./screenshots/texture_mipmap_gen.jpg" height="72px" align="right">
Generates a complete mip chain at runtime (instead of using mip levels stored in the texture file) by blitting from one mip level down to the next smaller one until the lower end of the mip chain (1x1 pixels) is reached.
This is done using image blits and proper image memory barriers.
### [Multi sampling](multisampling/)
### [Multi sampling](examples/multisampling/)
<img src="./screenshots/multisampling.png" height="72px" align="right">
Demonstrates the use of resolve attachments for doing multisampling. Instead of doing an explicit resolve from a multisampled image this example creates multisampled attachments for the color and depth buffer and sets up the render pass to use these as resolve attachments that will get resolved to the visible frame buffer at the end of this render pass. To highlight MSAA the example renders a mesh with fine details against a bright background. Here is a [screenshot without MSAA](./screenshots/multisampling_nomsaa.png) to compare.
### [Shadow mapping](shadowmapping/)
### [Shadow mapping](examples/shadowmapping/)
<img src="./screenshots/shadowmapping.png" height="72px" align="right">
Dynamic shadows from a ```directional light source``` in two passes. The first pass renders the scene depth from the light's point-of-view into a separate framebuffer attachment with a different (higher) resolution.
The second pass renders the scene from the camera's point-of-view and compares the depth value of the texels with the one stored in the offscreen depth attachment (which the shader directly samples from) to determine whether a texel is shadowed or not and then applies a PCF filter to smooth out shadow borders. To avoid shadow artefacts the dynamic depth bias state ([vkCmdSetDepthBias](https://www.khronos.org/registry/vulkan/specs/1.0/man/html/vkCmdSetDepthBias.html)) is used to apply a constant and slope depth bias factor.
### [Omnidirectional shadow mapping](shadowmappingomni/)
### [Omnidirectional shadow mapping](examples/shadowmappingomni/)
<img src="./screenshots/shadow_omnidirectional.png" height="72px" align="right">
Dynamic shadows from a ```point light source```. Uses a dynamic 32 bit floating point cube map for a point light source that casts shadows in all directions (unlike projective shadow mapping).
The cube map faces contain the distances from the light source, which are then used in the final scene rendering pass to determine whether a fragment is shadowed.
### [Skeletal animation](skeletalanimation/)
### [Skeletal animation](examples/skeletalanimation/)
<img src="./screenshots/mesh_skeletalanimation.png" height="72px" align="right">
This example loads and displays a rigged COLLADA model including animations. Bone weights are extracted for each vertex and are passed to the vertex shader together with the final bone transformation matrices for vertex position calculations.
### [Bloom](bloom/)
### [Bloom](examples/bloom/)
<img src="./screenshots/bloom.jpg" height="72px" align="right">
Advanced fullscreen shader example implementing a separated gaussian blur using two passes. The glowing parts of the scene are rendered to a low-resolution offscreen framebuffer that is blurred in two steps and then blended on top of the scene.
@ -223,14 +223,14 @@ Advanced fullscreen shader example implementing a separated gaussian blur using
*These examples use a [deferred shading](https://en.wikipedia.org/wiki/Deferred_shading) setup*
### [Deferred shading](deferred/)
### [Deferred shading](examples/deferred/)
<img src="./screenshots/deferred_shading.jpg" height="72px" align="right">
Demonstrates the use of multiple render targets to fill a G-Buffer for a deferred shading setup with multiple dynamic lights and normal mapped surfaces.
Deferred shading collects all values (color, normal, position) into different render targets in one pass thanks to multiple render targets, and then does all shading and lighting calculations based on these in screen space, allowing for many more light sources than traditional forward renderers.
### [Deferred shading and shadow mapping](deferredshadows/)
### [Deferred shading and shadow mapping](examples/deferredshadows/)
<img src="./screenshots/deferred_shadows.jpg" height="72px" align="right">
Building on the deferred shading setup this example adds directional shadows using shadow maps from multiple spotlights.
@ -239,7 +239,7 @@ Scene depth from the different light's point-of-view is renderer to a layered de
The final scene compositing pass then samples from the layered depth map to determine if a fragment is shadowed or not.
### [Screen space ambient occlusion](ssao/)
### [Screen space ambient occlusion](examples/ssao/)
<img src="./screenshots/ssao.jpg" height="72px" align="right">
Implements ambient occlusion in screen space, adding depth to a scene with the help of ambient occlusion. The example uses a deferred shading setup, with the AO pass using the depth information from the deferred G-Buffer to generate the ambient occlusion values. A second pass then blurs the AO results before they're applied to the scene in the final composition pass.
@ -248,17 +248,17 @@ Implements ambient occlusion in screen space, adding depth with the help of ambi
*Physically based rendering is a lighting technique that achieves a more realistic and dynamic look by applying approximations of bidirectional reflectance distribution functions that rely on measured real-world material parameters and environment lighting.*
### [Physical shading basics](pbrbasic/)
### [Physical shading basics](examples/pbrbasic/)
<img src="./screenshots/pbrbasic.jpg" height="72px" align="right">
Basic implementation of a metallic-roughness physically based rendering model using measured material parameters. Implements a specular BRDF based on material parameters for metallic reflectance, surface roughness and color, and displays a grid of objects with varying metallic and roughness parameters lit by multiple fixed light sources.
### [Physical shading with image based lighting](pbribl/)
### [Physical shading with image based lighting](examples/pbribl/)
<img src="./screenshots/pbribl.jpg" height="72px" align="right">
Adds ```image based lighting``` to the PBR equation. IBL uses the surrounding environment as a single light source. This adds an even more realistic look to the models, as the light contribution used by the materials is now controlled by the environment. The sample uses a fixed HDR environment cubemap for lighting and reflectance. The additional textures and cubemaps required for the enhanced lighting (BRDF 2D LUT, irradiance cube and a cube prefiltered based on roughness) are generated at run-time from that cubemap.
### [Physical shading with textures and image based lighting](pbrtexture/)
### [Physical shading with textures and image based lighting](examples/pbrtexture/)
<img src="./screenshots/pbrtexture.jpg" height="72px" align="right">
This example adds a textured model with materials created especially for the metallic-roughness PBR workflow. Where the other examples used fixed material parameters for the PBR equation (metallic, roughness, albedo), this model contains texture maps that store these values (plus a normal and ambient occlusion map) as input parameters for the BRDF shader. So even though the model uses only one material, there are differing roughness and metallic areas, and combined with image based lighting from the environment the model is rendered with a realistic look.
@ -267,31 +267,31 @@ This example adds a textured model with materials especially created for the met
*Compute shaders are mandatory in Vulkan and must be supported on all devices*
### [Particle system](computeparticles/)
### [Particle system](examples/computeparticles/)
<img src="./screenshots/compute_particles.jpg" height="72px" align="right">
Attraction based particle system. A shader storage buffer is used to store the particles, on which the compute shader does some physics calculations. The buffer is then used by the graphics pipeline for rendering with a gradient texture. Demonstrates the use of memory barriers for synchronizing vertex buffer access between a compute and a graphics pipeline.
### [N-body simulation](computenbody/)
### [N-body simulation](examples/computenbody/)
<img src="./screenshots/compute_nbody.jpg" height="72px" align="right">
Implements an N-body simulation based particle system with multiple attractors and particle-to-particle interaction using two passes separating particle movement calculation and final integration.
Also shows how to use ```shared compute shader memory``` for a significant performance boost.
### [Ray tracing](raytracing/)
### [Ray tracing](examples/raytracing/)
<img src="./screenshots/compute_raytracing.jpg" height="72px" align="right">
Implements a simple ray tracer using a compute shader. No primitives are rendered by the traditional pipeline except for a fullscreen quad that displays the ray traced results of the scene rendered by the compute shaders. Also implements shadows and basic reflections.
### [Cull and LOD](computecullandlod/)
### [Cull and LOD](examples/computecullandlod/)
<img src="./screenshots/compute_cullandlod.jpg" height="72px" align="right">
Based on ```indirect drawing```, this example uses a compute shader for visibility testing using ```frustum culling``` and ```level-of-detail selection``` based on the object's distance to the viewer.
A compute shader is applied to the indirect draw commands buffer that updates the indirect draw calls depending on object visibility and camera distance. This moves all visibility calculations to the GPU so the indirect draw buffer can stay in device local memory without having to map it back to the host for CPU-based updates.
### [Image processing](computeshader/)
### [Image processing](examples/computeshader/)
<img src="./screenshots/compute_imageprocessing.jpg" height="72px" align="right">
Demonstrates the basic use of a separate compute queue (and command buffer) to apply different convolution kernels on an input image in realtime.
@ -300,17 +300,17 @@ Demonstrates the basic use of a separate compute queue (and command buffer) to a
*Tessellation shader support is optional* (see ```deviceFeatures.tessellationShader```)
### [Displacement mapping](tessellation/)
### [Displacement mapping](examples/tessellation/)
<img src="./screenshots/tess_displacement.jpg" height="72px" align="right">
Uses tessellation shaders to generate additional details and displace geometry based on a heightmap.
### [Dynamic terrain tessellation](terraintessellation/)
### [Dynamic terrain tessellation](examples/terraintessellation/)
<img src="./screenshots/tess_dynamicterrain.jpg" height="72px" align="right">
Renders a terrain with dynamic tessellation based on screen space triangle size, resulting in closer parts of the terrain getting more details than distant parts. The terrain geometry is also generated by the tessellation shader using a 16 bit height map for displacement. To improve performance the example also does frustum culling in the tessellation shader.
### [PN-Triangles](tessellation/)
### [PN-Triangles](examples/tessellation/)
<img src="./screenshots/tess_pntriangles.jpg" height="72px" align="right">
Generating curved PN-Triangles on the GPU using tessellation shaders to add details to low-polygon meshes, based on [this paper](http://alex.vlachos.com/graphics/CurvedPNTriangles.pdf), with shaders from [this tutorial](http://onrendering.blogspot.de/2011/12/tessellation-on-gpu-curved-pn-triangles.html).
@ -319,45 +319,45 @@ Generating curved PN-Triangles on the GPU using tessellation shaders to add deta
*Geometry shader support is optional* (see ```deviceFeatures.geometryShader```)
### [Normal debugging](geometryshader/)
### [Normal debugging](examples/geometryshader/)
<img src="./screenshots/geom_normals.jpg" height="72px" align="right">
Uses a geometry shader to generate per-vertex normals that can be used for debugging. The first pass displays the solid mesh using basic Phong shading; a second pass then uses the geometry shader to generate normals for each vertex of the mesh.
## Extensions
### [VK_EXT_debug_marker](debugmarker/)
### [VK_EXT_debug_marker](examples/debugmarker/)
<img src="./screenshots/ext_debugmarker.jpg" height="72px" align="right">
Example application to be used along with [this tutorial](http://www.saschawillems.de/?page_id=2017) for demonstrating the use of the new VK_EXT_debug_marker extension. Introduced with Vulkan 1.0.12, it adds functionality to set debug markers, regions and name objects for advanced debugging in an offline graphics debugger like [RenderDoc](http://www.renderdoc.org).
## Misc
### [Parallax mapping](parallaxmapping/)
### [Parallax mapping](examples/parallaxmapping/)
<img src="./screenshots/parallax_mapping.jpg" height="72px" align="right">
Implements multiple texture mapping methods to simulate depth based purely on texture information without generating additional geometry. Along with basic normal mapping the example includes parallax mapping, steep parallax mapping and parallax occlusion mapping, with the latter being the best in quality but also having the highest performance impact.
### [Spherical environment mapping](sphericalenvmapping/)
### [Spherical environment mapping](examples/sphericalenvmapping/)
<img src="./screenshots/spherical_env_mapping.png" height="72px" align="right">
Uses a (spherical) material capture texture containing environment lighting and reflection information to fake complex lighting. The example also uses a texture array to store (and select) several material caps that can be toggled at runtime.
The technique is based on [this article](https://github.com/spite/spherical-environment-mapping).
### [Vulkan Gears](gears/)
### [Vulkan Gears](examples/gears/)
<img src="./screenshots/basic_gears.png" height="72px" align="right">
Vulkan interpretation of glxgears. Procedurally generates separate meshes for each gear, with every mesh having its own uniform buffer object for animation. Also demonstrates how to use different descriptor sets.
### [Distance field fonts](distancefieldfonts/)
### [Distance field fonts](examples/distancefieldfonts/)
<img src="./screenshots/font_distancefield.png" height="72px" align="right">
Instead of just sampling a bitmap font texture, a texture with per-character signed distance fields is used to generate high quality glyphs in the fragment shader. This results in much higher quality than common bitmap fonts, even when heavily zoomed.
Distance field font textures can be generated with tools like [Hiero](https://github.com/libgdx/libgdx/wiki/Hiero).
### [Vulkan demo scene](vulkanscene/)
### [Vulkan demo scene](examples/vulkanscene/)
<img src="./screenshots/vulkan_scene.png" height="72px" align="right">
More of a playground than an actual example. Renders the Vulkan logo using multiple meshes with different shaders (and pipelines) including a background.


@ -16,7 +16,7 @@ include $(CLEAR_VARS)
LOCAL_MODULE := %APK_NAME%
PROJECT_FILES := $(wildcard $(LOCAL_PATH)/../../%SRC_FOLDER%/*.cpp)
PROJECT_FILES := $(wildcard $(LOCAL_PATH)/../../examples/%SRC_FOLDER%/*.cpp)
PROJECT_FILES += $(wildcard $(LOCAL_PATH)/../../base/*.cpp)
PROJECT_FILES += $(wildcard $(LOCAL_PATH)/../../external/imgui/imgui.cpp $(LOCAL_PATH)/../../external/imgui/imgui_draw.cpp)


@ -1,12 +0,0 @@
FOR /d /r . %%x IN (x64) DO @IF EXIST "%%x" rd /s /q "%%x"
cd bin
del *.ilk
del *.lastcodeanalysissucceeded
del *.obj
del *.idb
del *.pdb
del *.log
del *.tlog
del *.xml
FOR /d /r . %%x IN (*tlog) DO @IF EXIST "%%x" rd /s /q "%%x"
cd..

examples/CMakeLists.txt (new file, 102 lines)

@ -0,0 +1,102 @@
# Function for building single example
function(buildExample EXAMPLE_NAME)
SET(EXAMPLE_FOLDER ${CMAKE_CURRENT_SOURCE_DIR}/${EXAMPLE_NAME})
message(STATUS "Generating project file for example in ${EXAMPLE_FOLDER}")
# Main
file(GLOB SOURCE *.cpp ${BASE_HEADERS} ${EXAMPLE_FOLDER}/*.cpp)
SET(MAIN_CPP ${EXAMPLE_FOLDER}/${EXAMPLE_NAME}.cpp)
if(EXISTS ${EXAMPLE_FOLDER}/main.cpp)
SET(MAIN_CPP ${EXAMPLE_FOLDER}/main.cpp)
ENDIF()
# imgui example requires additional source files
IF(${EXAMPLE_NAME} STREQUAL "imgui")
file(GLOB ADD_SOURCE "../external/imgui/*.cpp")
SET(SOURCE ${SOURCE} ${ADD_SOURCE})
ENDIF()
# Add shaders
set(SHADER_DIR "../data/shaders/${EXAMPLE_NAME}")
file(GLOB SHADERS "${SHADER_DIR}/*.vert" "${SHADER_DIR}/*.frag" "${SHADER_DIR}/*.comp" "${SHADER_DIR}/*.geom" "${SHADER_DIR}/*.tesc" "${SHADER_DIR}/*.tese")
source_group("Shaders" FILES ${SHADERS})
if(WIN32)
add_executable(${EXAMPLE_NAME} WIN32 ${MAIN_CPP} ${SOURCE} ${SHADERS})
target_link_libraries(${EXAMPLE_NAME} base ${Vulkan_LIBRARY} ${ASSIMP_LIBRARIES} ${WINLIBS})
else(WIN32)
add_executable(${EXAMPLE_NAME} ${MAIN_CPP} ${SOURCE} ${SHADERS})
target_link_libraries(${EXAMPLE_NAME} base )
endif(WIN32)
if(RESOURCE_INSTALL_DIR)
install(TARGETS ${EXAMPLE_NAME} DESTINATION ${CMAKE_INSTALL_BINDIR})
endif()
endfunction(buildExample)
# Build all examples
function(buildExamples)
foreach(EXAMPLE ${EXAMPLES})
buildExample(${EXAMPLE})
endforeach(EXAMPLE)
endfunction(buildExamples)
set(EXAMPLES
bloom
computecloth
computecullandlod
computeheadless
computenbody
computeparticles
computeshader
debugmarker
deferred
deferredmultisampling
deferredshadows
displacement
distancefieldfonts
dynamicuniformbuffer
gears
geometryshader
hdr
imgui
indirectdraw
instancing
mesh
multisampling
multithreading
occlusionquery
offscreen
parallaxmapping
particlefire
pbrbasic
pbribl
pbrtexture
pipelines
pipelinestatistics
pushconstants
radialblur
raytracing
renderheadless
scenerendering
screenshot
shadowmapping
shadowmappingomni
skeletalanimation
specializationconstants
sphericalenvmapping
ssao
stencilbuffer
subpasses
terraintessellation
tessellation
textoverlay
texture
texture3d
texturearray
texturecubemap
texturemipmapgen
texturesparseresidency
timestampquery
triangle
viewportarray
vulkanscene
)
buildExamples()


@ -20,6 +20,7 @@
#include <vulkan/vulkan.h>
#include "vulkanexamplebase.h"
#include "VulkanModel.hpp"
#include "VulkanTexture.hpp"
#define VERTEX_BUFFER_BIND_ID 0
#define ENABLE_VALIDATION false
@ -44,8 +45,13 @@ public:
struct {
vks::Model object;
vks::Model leaves;
} models;
struct {
vks::Texture2D leaf;
} textures;
struct {
glm::mat4 projection;
glm::mat4 model;
@ -54,6 +60,7 @@ public:
struct {
glm::mat4 projection;
glm::mat4 model;
glm::vec2 viewportDim;
} uboGS;
struct {
@ -121,7 +128,7 @@ public:
VkCommandBufferBeginInfo cmdBufInfo = vks::initializers::commandBufferBeginInfo();
VkClearValue clearValues[2];
clearValues[0].color = { { 0.0f, 0.0f, 0.0f, 0.0f } };
clearValues[0].color = { { 0.0f, 0.0f, 0.2f, 0.0f } };
clearValues[1].depthStencil = { 1.0f, 0 };
VkRenderPassBeginInfo renderPassBeginInfo = vks::initializers::renderPassBeginInfo();
@ -142,8 +149,7 @@ public:
vkCmdBeginRenderPass(drawCmdBuffers[i], &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
VkViewport viewport = vks::initializers::viewport((float)width, (float)height, 0.0f, 1.0f
);
VkViewport viewport = vks::initializers::viewport((float)width, (float)height, 0.0f, 1.0f);
vkCmdSetViewport(drawCmdBuffers[i], 0, 1, &viewport);
VkRect2D scissor = vks::initializers::rect2D(width, height, 0, 0);
@ -164,6 +170,8 @@ public:
// Normal debugging
if (displayNormals)
{
vkCmdBindVertexBuffers(drawCmdBuffers[i], VERTEX_BUFFER_BIND_ID, 1, &models.leaves.vertices.buffer, offsets);
vkCmdBindIndexBuffer(drawCmdBuffers[i], models.leaves.indices.buffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.normals);
vkCmdDrawIndexed(drawCmdBuffers[i], models.object.indexCount, 1, 0, 0, 0);
}
@ -176,7 +184,9 @@ public:
void loadAssets()
{
models.object.loadFromFile(getAssetPath() + "models/suzanne.obj", vertexLayout, 0.25f, vulkanDevice, queue);
models.object.loadFromFile(getAssetPath() + "models/tree.dae", vertexLayout, 0.25f, vulkanDevice, queue);
models.leaves.loadFromFile(getAssetPath() + "models/tree_leaves.dae", vertexLayout, 0.25f, vulkanDevice, queue);
textures.leaf.loadFromFile(getAssetPath() + "textures/leaf.ktx", VK_FORMAT_R8G8B8A8_UNORM, vulkanDevice, queue);
}
void setupVertexDescriptions()
@@ -227,10 +237,9 @@ public:
void setupDescriptorPool()
{
// Example uses two ubos and one combined image sampler
std::vector<VkDescriptorPoolSize> poolSizes =
{
std::vector<VkDescriptorPoolSize> poolSizes = {
vks::initializers::descriptorPoolSize(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 2),
vks::initializers::descriptorPoolSize(VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1),
};
VkDescriptorPoolCreateInfo descriptorPoolInfo =
@@ -255,7 +264,12 @@ public:
vks::initializers::descriptorSetLayoutBinding(
VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
VK_SHADER_STAGE_GEOMETRY_BIT,
1)
1),
// Binding 2 : Fragment shader combined image sampler (leaf texture)
vks::initializers::descriptorSetLayoutBinding(
VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
VK_SHADER_STAGE_FRAGMENT_BIT,
2),
};
VkDescriptorSetLayoutCreateInfo descriptorLayout =
@@ -296,7 +310,13 @@ public:
descriptorSet,
VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
1,
&uniformBuffers.GS.descriptor)
&uniformBuffers.GS.descriptor),
// Binding 2 : Fragment shader combined image sampler (leaf texture)
vks::initializers::writeDescriptorSet(
descriptorSet,
VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
2,
&textures.leaf.descriptor),
};
vkUpdateDescriptorSets(device, writeDescriptorSets.size(), writeDescriptorSets.data(), 0, NULL);
@@ -313,7 +333,7 @@ public:
VkPipelineRasterizationStateCreateInfo rasterizationState =
vks::initializers::pipelineRasterizationStateCreateInfo(
VK_POLYGON_MODE_FILL,
VK_CULL_MODE_BACK_BIT,
VK_CULL_MODE_NONE,
VK_FRONT_FACE_CLOCKWISE,
0);
@@ -427,6 +447,7 @@ public:
// Geometry shader
uboGS.model = uboVS.model;
uboGS.projection = uboVS.projection;
uboGS.viewportDim = glm::vec2(width, height);
memcpy(uniformBuffers.GS.mapped, &uboGS, sizeof(uboGS));
}


@@ -43,7 +43,6 @@ class VulkanExample : public VulkanExampleBase
{
public:
bool displayShadowMap = false;
bool lightPOV = false;
bool filterPCF = true;
// Keep depth range as small as possible
@@ -720,7 +719,6 @@ public:
renderPass,
0);
pipelineCreateInfo.pVertexInputState = &vertices.inputState;
pipelineCreateInfo.pInputAssemblyState = &inputAssemblyState;
pipelineCreateInfo.pRasterizationState = &rasterizationState;
pipelineCreateInfo.pColorBlendState = &colorBlendState;
@@ -735,8 +733,13 @@ public:
rasterizationState.cullMode = VK_CULL_MODE_NONE;
shaderStages[0] = loadShader(getAssetPath() + "shaders/shadowmapping/quad.vert.spv", VK_SHADER_STAGE_VERTEX_BIT);
shaderStages[1] = loadShader(getAssetPath() + "shaders/shadowmapping/quad.frag.spv", VK_SHADER_STAGE_FRAGMENT_BIT);
// Empty vertex input state
VkPipelineVertexInputStateCreateInfo emptyInputState = vks::initializers::pipelineVertexInputStateCreateInfo();
pipelineCreateInfo.pVertexInputState = &emptyInputState;
VK_CHECK_RESULT(vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineCreateInfo, nullptr, &pipelines.quad));
pipelineCreateInfo.pVertexInputState = &vertices.inputState;
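The empty vertex input state works because the debug quad pipeline binds no vertex buffers at all; the vertices are derived in the vertex shader from `gl_VertexIndex`. A common variant of this trick (a generic sketch, not necessarily the exact math in `quad.vert`) generates a single triangle that covers the whole viewport:

```cpp
#include <utility>

// UV coordinate for vertex i (0..2) of a fullscreen triangle, as typically
// derived from gl_VertexIndex in the vertex shader:
// (0,0), (2,0), (0,2) -- after pos = uv * 2 - 1 the triangle covers clip space
std::pair<float, float> fullscreenTriangleUV(int i)
{
    return { static_cast<float>((i << 1) & 2), static_cast<float>(i & 2) };
}
```

Drawing it is then just `vkCmdDraw(cmd, 3, 1, 0, 0)` with no buffers bound, which is why `pVertexInputState` can be empty for that pipeline.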
// Scene rendering with shadows applied
rasterizationState.cullMode = VK_CULL_MODE_BACK_BIT;
shaderStages[0] = loadShader(getAssetPath() + "shaders/shadowmapping/scene.vert.spv", VK_SHADER_STAGE_VERTEX_BIT);
@@ -838,13 +841,6 @@ public:
uboVSscene.lightPos = lightPos;
// Render scene from light's point of view
if (lightPOV)
{
uboVSscene.projection = glm::perspective(glm::radians(lightFOV), (float)width / (float)height, zNear, zFar);
uboVSscene.view = glm::lookAt(lightPos, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
}
uboVSscene.depthBiasMVP = uboOffscreenVS.depthMVP;
memcpy(uniformBuffers.scene.mapped, &uboVSscene, sizeof(uboVSscene));
@@ -944,9 +940,6 @@ public:
if (overlay->checkBox("PCF filtering", &filterPCF)) {
buildCommandBuffers();
}
if (overlay->checkBox("Light POV", &lightPOV)) {
viewChanged();
}
}
}
};


@@ -1,9 +1,19 @@
/*
* Vulkan Example - Using subpasses for G-Buffer compositing
*
* Copyright (C) 2016 by Sascha Willems - www.saschawillems.de
* Copyright (C) 2016-2017 by Sascha Willems - www.saschawillems.de
*
* This code is licensed under the MIT license (MIT) (http://opensource.org/licenses/MIT)
*
* Summary:
* Implements a deferred rendering setup with a forward transparency pass using subpasses
*
* Subpasses allow reading the attachments written by a previous subpass (within the
* same render pass) at the same pixel position.
*
* This feature was especially designed for tile-based renderers
* (mostly mobile GPUs) and is an optimization feature in Vulkan for those GPU types.
*
*/
#include <stdio.h>
@@ -463,9 +473,6 @@ public:
VK_CHECK_RESULT(vkBeginCommandBuffer(drawCmdBuffers[i], &cmdBufInfo));
// First sub pass
// Renders the components of the scene to the G-Buffer attachments
vkCmdBeginRenderPass(drawCmdBuffers[i], &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
VkViewport viewport = vks::initializers::viewport((float)width, (float)height, 0.0f, 1.0f);
@@ -476,15 +483,24 @@ public:
VkDeviceSize offsets[1] = { 0 };
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.offscreen);
// First sub pass
// Renders the components of the scene to the G-Buffer attachments
{
vks::debugmarker::beginRegion(drawCmdBuffers[i], "Subpass 0: Deferred G-Buffer creation", glm::vec4(1.0f, 1.0f, 1.0f, 1.0f));
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.offscreen);
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayouts.offscreen, 0, 1, &descriptorSets.scene, 0, NULL);
vkCmdBindVertexBuffers(drawCmdBuffers[i], VERTEX_BUFFER_BIND_ID, 1, &models.scene.vertices.buffer, offsets);
vkCmdBindIndexBuffer(drawCmdBuffers[i], models.scene.indices.buffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(drawCmdBuffers[i], models.scene.indexCount, 1, 0, 0, 0);
vks::debugmarker::endRegion(drawCmdBuffers[i]);
}
// Second sub pass
// This subpass uses the G-Buffer components filled in the first subpass as input attachments for the final compositing
{
vks::debugmarker::beginRegion(drawCmdBuffers[i], "Subpass 1: Deferred composition", glm::vec4(1.0f, 1.0f, 1.0f, 1.0f));
vkCmdNextSubpass(drawCmdBuffers[i], VK_SUBPASS_CONTENTS_INLINE);
@@ -492,8 +508,14 @@ public:
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayouts.composition, 0, 1, &descriptorSets.composition, 0, NULL);
vkCmdDraw(drawCmdBuffers[i], 3, 1, 0, 0);
vks::debugmarker::endRegion(drawCmdBuffers[i]);
}
// Third subpass
// Render transparent geometry using a forward pass that compares against the depth generated during the G-Buffer fill
{
vks::debugmarker::beginRegion(drawCmdBuffers[i], "Subpass 2: Forward transparency", glm::vec4(1.0f, 1.0f, 1.0f, 1.0f));
vkCmdNextSubpass(drawCmdBuffers[i], VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.transparent);
@@ -502,6 +524,9 @@ public:
vkCmdBindIndexBuffer(drawCmdBuffers[i], models.transparent.indices.buffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(drawCmdBuffers[i], models.transparent.indexCount, 1, 0, 0, 0);
vks::debugmarker::endRegion(drawCmdBuffers[i]);
}
vkCmdEndRenderPass(drawCmdBuffers[i]);
VK_CHECK_RESULT(vkEndCommandBuffer(drawCmdBuffers[i]));


@@ -0,0 +1,454 @@
/*
* Vulkan Example - Using device timestamps for performance measurements
*
* Copyright (C) 2017 by Sascha Willems - www.saschawillems.de
*
* This code is licensed under the MIT license (MIT) (http://opensource.org/licenses/MIT)
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <vector>
#define GLM_FORCE_RADIANS
#define GLM_FORCE_DEPTH_ZERO_TO_ONE
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <vulkan/vulkan.h>
#include "vulkanexamplebase.h"
#include "VulkanBuffer.hpp"
#include "VulkanModel.hpp"
#define ENABLE_VALIDATION false
#define OBJ_DIM 0.05f
class VulkanExample : public VulkanExampleBase
{
public:
// Vertex layout for the models
vks::VertexLayout vertexLayout = vks::VertexLayout({
vks::VERTEX_COMPONENT_POSITION,
vks::VERTEX_COMPONENT_NORMAL,
vks::VERTEX_COMPONENT_COLOR,
});
struct Models {
vks::Model skybox;
std::vector<vks::Model> objects;
int32_t objectIndex = 3;
std::vector<std::string> names;
} models;
struct {
vks::Buffer VS;
} uniformBuffers;
struct UBOVS {
glm::mat4 projection;
glm::mat4 modelview;
glm::vec4 lightPos = glm::vec4(-10.0f, -10.0f, 10.0f, 1.0f);
} uboVS;
std::vector<VkPipeline> pipelines;
std::vector<std::string> pipelineNames;
int32_t pipelineIndex = 0;
VkPipelineLayout pipelineLayout;
VkDescriptorSet descriptorSet;
VkDescriptorSetLayout descriptorSetLayout;
VkQueryPool queryPool;
std::vector<float> timings;
int32_t gridSize = 3;
VulkanExample() : VulkanExampleBase(ENABLE_VALIDATION)
{
title = "Device timestamps";
camera.type = Camera::CameraType::firstperson;
camera.setPosition(glm::vec3(-4.0f, 3.0f, -3.75f));
camera.setRotation(glm::vec3(-15.25f, -46.5f, 0.0f));
camera.movementSpeed = 4.0f;
camera.setPerspective(60.0f, (float)width / (float)height, 0.1f, 256.0f);
camera.rotationSpeed = 0.25f;
settings.overlay = true;
}
~VulkanExample()
{
for (auto& pipeline : pipelines) {
vkDestroyPipeline(device, pipeline, nullptr);
}
vkDestroyPipelineLayout(device, pipelineLayout, nullptr);
vkDestroyDescriptorSetLayout(device, descriptorSetLayout, nullptr);
vkDestroyQueryPool(device, queryPool, nullptr);
uniformBuffers.VS.destroy();
for (auto& model : models.objects) {
model.destroy();
}
//models.skybox.destroy();
}
// Setup a query pool for storing device timestamp query results
void setupQueryPool()
{
// Two timing intervals are displayed, measured with three timestamps
timings.resize(2);
// Create query pool
VkQueryPoolCreateInfo queryPoolInfo = {};
queryPoolInfo.sType = VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO;
queryPoolInfo.queryType = VK_QUERY_TYPE_TIMESTAMP;
// One more timestamp than intervals (frame start, after vertex stage, frame end)
queryPoolInfo.queryCount = static_cast<uint32_t>(timings.size() + 1);
VK_CHECK_RESULT(vkCreateQueryPool(device, &queryPoolInfo, NULL, &queryPool));
}
// Retrieves the results of the timestamp queries submitted with the command buffer
void getQueryResults()
{
uint64_t timestamps[3];
// Use 64 bit results, as 32 bit timestamp values can wrap within seconds
VK_CHECK_RESULT(vkGetQueryPoolResults(device, queryPool, 0, 3, sizeof(timestamps), timestamps, sizeof(uint64_t), VK_QUERY_RESULT_64_BIT | VK_QUERY_RESULT_WAIT_BIT));
// timestampPeriod is the number of nanoseconds per timestamp value increment
float nsPerTick = deviceProperties.limits.timestampPeriod;
// Convert tick deltas to milliseconds
timings[0] = (float)(timestamps[1] - timestamps[0]) * nsPerTick / 1e6f;
timings[1] = (float)(timestamps[2] - timestamps[1]) * nsPerTick / 1e6f;
}
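The tick-to-millisecond conversion can be checked in isolation. This is a hand-rolled helper (not part of the example's code); real devices report the period via `VkPhysicalDeviceLimits::timestampPeriod`, and the period values below are only illustrative:

```cpp
#include <cstdint>

// Convert a timestamp interval (in query ticks) to milliseconds.
// timestampPeriodNs is the device's reported nanoseconds per tick.
float ticksToMilliseconds(uint64_t start, uint64_t end, float timestampPeriodNs)
{
    // ticks * ns-per-tick = nanoseconds; 1 ms = 1e6 ns
    return static_cast<float>(end - start) * timestampPeriodNs / 1e6f;
}
```

Note the direction of the conversion: the tick delta is *multiplied* by the period (ns per tick) and then divided by 1e6 to land in milliseconds.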
void buildCommandBuffers()
{
VkCommandBufferBeginInfo cmdBufInfo = vks::initializers::commandBufferBeginInfo();
VkClearValue clearValues[2];
clearValues[0].color = defaultClearColor;
clearValues[1].depthStencil = { 1.0f, 0 };
VkRenderPassBeginInfo renderPassBeginInfo = vks::initializers::renderPassBeginInfo();
renderPassBeginInfo.renderPass = renderPass;
renderPassBeginInfo.renderArea.offset.x = 0;
renderPassBeginInfo.renderArea.offset.y = 0;
renderPassBeginInfo.renderArea.extent.width = width;
renderPassBeginInfo.renderArea.extent.height = height;
renderPassBeginInfo.clearValueCount = 2;
renderPassBeginInfo.pClearValues = clearValues;
for (int32_t i = 0; i < drawCmdBuffers.size(); ++i) {
renderPassBeginInfo.framebuffer = frameBuffers[i];
VK_CHECK_RESULT(vkBeginCommandBuffer(drawCmdBuffers[i], &cmdBufInfo));
// Reset timestamp query pool
vkCmdResetQueryPool(drawCmdBuffers[i], queryPool, 0, 3);
vkCmdBeginRenderPass(drawCmdBuffers[i], &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
VkViewport viewport = vks::initializers::viewport((float)width, (float)height, 0.0f, 1.0f);
vkCmdSetViewport(drawCmdBuffers[i], 0, 1, &viewport);
VkRect2D scissor = vks::initializers::rect2D(width, height, 0, 0);
vkCmdSetScissor(drawCmdBuffers[i], 0, 1, &scissor);
VkDeviceSize offsets[1] = { 0 };
vkCmdWriteTimestamp(drawCmdBuffers[i], VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, queryPool, 0);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines[pipelineIndex]);
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout, 0, 1, &descriptorSet, 0, NULL);
vkCmdBindVertexBuffers(drawCmdBuffers[i], 0, 1, &models.objects[models.objectIndex].vertices.buffer, offsets);
vkCmdBindIndexBuffer(drawCmdBuffers[i], models.objects[models.objectIndex].indices.buffer, 0, VK_INDEX_TYPE_UINT32);
for (int32_t y = 0; y < gridSize; y++) {
for (int32_t x = 0; x < gridSize; x++) {
glm::vec3 pos = glm::vec3(float(x - (gridSize / 2.0f)) * 2.5f, 0.0f, float(y - (gridSize / 2.0f)) * 2.5f);
vkCmdPushConstants(drawCmdBuffers[i], pipelineLayout, VK_SHADER_STAGE_VERTEX_BIT, 0, sizeof(glm::vec3), &pos);
vkCmdDrawIndexed(drawCmdBuffers[i], models.objects[models.objectIndex].indexCount, 1, 0, 0, 0);
}
}
vkCmdWriteTimestamp(drawCmdBuffers[i], VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, queryPool, 1);
vkCmdWriteTimestamp(drawCmdBuffers[i], VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, queryPool, 2);
vkCmdEndRenderPass(drawCmdBuffers[i]);
VK_CHECK_RESULT(vkEndCommandBuffer(drawCmdBuffers[i]));
}
}
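The per-object translation in the push-constant loop above can be restated as a small standalone helper (the 2.5f spacing matches the loop; the function name is mine):

```cpp
// World-space offset of cell index i along one axis of a grid of
// gridSize cells centered on the origin, with fixed spacing
float gridOffset(int i, int gridSize, float spacing = 2.5f)
{
    return (static_cast<float>(i) - gridSize / 2.0f) * spacing;
}
```

For a 3x3 grid this yields offsets -3.75, -1.25, 1.25, so the objects straddle the origin rather than starting at it.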
void draw()
{
VulkanExampleBase::prepareFrame();
submitInfo.commandBufferCount = 1;
submitInfo.pCommandBuffers = &drawCmdBuffers[currentBuffer];
VK_CHECK_RESULT(vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE));
// Read query results for displaying in next frame
getQueryResults();
VulkanExampleBase::submitFrame();
}
void loadAssets()
{
// Skybox
// models.skybox.loadFromFile(getAssetPath() + "models/cube.obj", vertexLayout, 1.0f, vulkanDevice, queue);
// Objects
std::vector<std::string> filenames = { "geosphere.obj", "teapot.dae", "torusknot.obj", "venus.fbx" };
for (auto file : filenames) {
vks::Model model;
model.loadFromFile(getAssetPath() + "models/" + file, vertexLayout, OBJ_DIM * (file == "venus.fbx" ? 3.0f : 1.0f), vulkanDevice, queue);
models.objects.push_back(model);
}
models.names = { "Sphere", "Teapot", "Torusknot", "Venus" };
}
void setupDescriptorPool()
{
std::vector<VkDescriptorPoolSize> poolSizes =
{
// One uniform buffer block for each mesh
vks::initializers::descriptorPoolSize(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 3)
};
VkDescriptorPoolCreateInfo descriptorPoolInfo =
vks::initializers::descriptorPoolCreateInfo(
poolSizes.size(),
poolSizes.data(),
3);
VK_CHECK_RESULT(vkCreateDescriptorPool(device, &descriptorPoolInfo, nullptr, &descriptorPool));
}
void setupDescriptorSetLayout()
{
std::vector<VkDescriptorSetLayoutBinding> setLayoutBindings = {
vks::initializers::descriptorSetLayoutBinding(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, VK_SHADER_STAGE_VERTEX_BIT, 0)
};
VkDescriptorSetLayoutCreateInfo descriptorLayout =
vks::initializers::descriptorSetLayoutCreateInfo(setLayoutBindings);
VK_CHECK_RESULT(vkCreateDescriptorSetLayout(device, &descriptorLayout, nullptr, &descriptorSetLayout));
VkPipelineLayoutCreateInfo pipelineLayoutCreateInfo =
vks::initializers::pipelineLayoutCreateInfo(&descriptorSetLayout, 1);
VkPushConstantRange pushConstantRange = vks::initializers::pushConstantRange(VK_SHADER_STAGE_VERTEX_BIT, sizeof(glm::vec3), 0);
pipelineLayoutCreateInfo.pushConstantRangeCount = 1;
pipelineLayoutCreateInfo.pPushConstantRanges = &pushConstantRange;
VK_CHECK_RESULT(vkCreatePipelineLayout(device, &pipelineLayoutCreateInfo, nullptr, &pipelineLayout));
}
void setupDescriptorSets()
{
VkDescriptorSetAllocateInfo allocInfo =
vks::initializers::descriptorSetAllocateInfo(descriptorPool, &descriptorSetLayout, 1);
VK_CHECK_RESULT(vkAllocateDescriptorSets(device, &allocInfo, &descriptorSet));
std::vector<VkWriteDescriptorSet> writeDescriptorSets = {
vks::initializers::writeDescriptorSet(descriptorSet, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 0, &uniformBuffers.VS.descriptor)
};
vkUpdateDescriptorSets(device, writeDescriptorSets.size(), writeDescriptorSets.data(), 0, NULL);
}
void preparePipelines()
{
VkPipelineInputAssemblyStateCreateInfo inputAssemblyState =
vks::initializers::pipelineInputAssemblyStateCreateInfo(
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST,
0,
VK_FALSE);
VkPipelineRasterizationStateCreateInfo rasterizationState =
vks::initializers::pipelineRasterizationStateCreateInfo(
VK_POLYGON_MODE_FILL,
VK_CULL_MODE_NONE,
VK_FRONT_FACE_CLOCKWISE,
0);
VkPipelineColorBlendAttachmentState blendAttachmentState =
vks::initializers::pipelineColorBlendAttachmentState(
0xf,
VK_FALSE);
VkPipelineColorBlendStateCreateInfo colorBlendState =
vks::initializers::pipelineColorBlendStateCreateInfo(
1,
&blendAttachmentState);
VkPipelineDepthStencilStateCreateInfo depthStencilState =
vks::initializers::pipelineDepthStencilStateCreateInfo(
VK_TRUE,
VK_TRUE,
VK_COMPARE_OP_LESS_OR_EQUAL);
VkPipelineViewportStateCreateInfo viewportState =
vks::initializers::pipelineViewportStateCreateInfo(1, 1, 0);
VkPipelineMultisampleStateCreateInfo multisampleState =
vks::initializers::pipelineMultisampleStateCreateInfo(
VK_SAMPLE_COUNT_1_BIT,
0);
std::vector<VkDynamicState> dynamicStateEnables = {
VK_DYNAMIC_STATE_VIEWPORT,
VK_DYNAMIC_STATE_SCISSOR
};
VkPipelineDynamicStateCreateInfo dynamicState =
vks::initializers::pipelineDynamicStateCreateInfo(
dynamicStateEnables.data(),
dynamicStateEnables.size(),
0);
VkGraphicsPipelineCreateInfo pipelineCreateInfo =
vks::initializers::pipelineCreateInfo(
pipelineLayout,
renderPass,
0);
std::array<VkPipelineShaderStageCreateInfo, 2> shaderStages;
pipelineCreateInfo.pInputAssemblyState = &inputAssemblyState;
pipelineCreateInfo.pRasterizationState = &rasterizationState;
pipelineCreateInfo.pColorBlendState = &colorBlendState;
pipelineCreateInfo.pMultisampleState = &multisampleState;
pipelineCreateInfo.pViewportState = &viewportState;
pipelineCreateInfo.pDepthStencilState = &depthStencilState;
pipelineCreateInfo.pDynamicState = &dynamicState;
pipelineCreateInfo.stageCount = shaderStages.size();
pipelineCreateInfo.pStages = shaderStages.data();
// Vertex bindings and attributes
std::vector<VkVertexInputBindingDescription> vertexInputBindings = {
vks::initializers::vertexInputBindingDescription(0, vertexLayout.stride(), VK_VERTEX_INPUT_RATE_VERTEX)
};
std::vector<VkVertexInputAttributeDescription> vertexInputAttributes = {
vks::initializers::vertexInputAttributeDescription(0, 0, VK_FORMAT_R32G32B32_SFLOAT, 0), // Location 0 : Position
vks::initializers::vertexInputAttributeDescription(0, 1, VK_FORMAT_R32G32B32_SFLOAT, sizeof(float) * 3), // Location 1 : Normal
vks::initializers::vertexInputAttributeDescription(0, 2, VK_FORMAT_R32G32B32_SFLOAT, sizeof(float) * 6)	// Location 2 : Color
};
VkPipelineVertexInputStateCreateInfo vertexInputState = vks::initializers::pipelineVertexInputStateCreateInfo();
vertexInputState.vertexBindingDescriptionCount = static_cast<uint32_t>(vertexInputBindings.size());
vertexInputState.pVertexBindingDescriptions = vertexInputBindings.data();
vertexInputState.vertexAttributeDescriptionCount = static_cast<uint32_t>(vertexInputAttributes.size());
vertexInputState.pVertexAttributeDescriptions = vertexInputAttributes.data();
pipelineCreateInfo.pVertexInputState = &vertexInputState;
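The attribute offsets above follow directly from the interleaved layout (position, normal, color; three 32-bit floats each, matching the `VK_FORMAT_R32G32B32_SFLOAT` attributes). A quick standalone check of the byte offsets (the constant names are mine):

```cpp
#include <cstddef>

// Byte offsets within one interleaved vertex of position/normal/color
constexpr size_t positionOffset = 0;
constexpr size_t normalOffset   = sizeof(float) * 3; // after 3 position floats
constexpr size_t colorOffset    = sizeof(float) * 6; // after position + normal
constexpr size_t vertexStride   = sizeof(float) * 9; // total bytes per vertex
```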
pipelines.resize(3);
// Phong shading
shaderStages[0] = loadShader(getAssetPath() + "shaders/timestampquery/mesh.vert.spv", VK_SHADER_STAGE_VERTEX_BIT);
shaderStages[1] = loadShader(getAssetPath() + "shaders/timestampquery/mesh.frag.spv", VK_SHADER_STAGE_FRAGMENT_BIT);
VK_CHECK_RESULT(vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineCreateInfo, nullptr, &pipelines[0]));
// Color only
shaderStages[0] = loadShader(getAssetPath() + "shaders/timestampquery/simple.vert.spv", VK_SHADER_STAGE_VERTEX_BIT);
shaderStages[1] = loadShader(getAssetPath() + "shaders/timestampquery/simple.frag.spv", VK_SHADER_STAGE_FRAGMENT_BIT);
rasterizationState.cullMode = VK_CULL_MODE_NONE;
VK_CHECK_RESULT(vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineCreateInfo, nullptr, &pipelines[1]));
// Blending
shaderStages[0] = loadShader(getAssetPath() + "shaders/timestampquery/occluder.vert.spv", VK_SHADER_STAGE_VERTEX_BIT);
shaderStages[1] = loadShader(getAssetPath() + "shaders/timestampquery/occluder.frag.spv", VK_SHADER_STAGE_FRAGMENT_BIT);
rasterizationState.cullMode = VK_CULL_MODE_FRONT_BIT;
blendAttachmentState.blendEnable = VK_TRUE;
blendAttachmentState.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT | VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;
blendAttachmentState.srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA;
blendAttachmentState.dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
blendAttachmentState.colorBlendOp = VK_BLEND_OP_ADD;
blendAttachmentState.srcAlphaBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
blendAttachmentState.dstAlphaBlendFactor = VK_BLEND_FACTOR_ZERO;
blendAttachmentState.alphaBlendOp = VK_BLEND_OP_ADD;
depthStencilState.depthWriteEnable = VK_FALSE;
VK_CHECK_RESULT(vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineCreateInfo, nullptr, &pipelines[2]));
pipelineNames = { "Shaded", "Color only", "Blending" };
}
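The blend state configured for the third pipeline uses `SRC_ALPHA` / `ONE_MINUS_SRC_ALPHA` with `VK_BLEND_OP_ADD` for the color channels, i.e. standard alpha blending. Per channel the fixed-function math reduces to the following (a standalone restatement, not code from the example):

```cpp
// Standard alpha blending for a single color channel:
// out = src * srcAlpha + dst * (1 - srcAlpha)
float blendChannel(float src, float dst, float srcAlpha)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

With srcAlpha = 0 the destination passes through unchanged; with srcAlpha = 1 the source fully replaces it.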
// Prepare and initialize uniform buffer containing shader uniforms
void prepareUniformBuffers()
{
// Vertex shader uniform buffer block
VK_CHECK_RESULT(vulkanDevice->createBuffer(
VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT,
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT,
&uniformBuffers.VS,
sizeof(uboVS)));
// Map persistent
VK_CHECK_RESULT(uniformBuffers.VS.map());
updateUniformBuffers();
}
void updateUniformBuffers()
{
uboVS.projection = camera.matrices.perspective;
uboVS.modelview = camera.matrices.view;
memcpy(uniformBuffers.VS.mapped, &uboVS, sizeof(uboVS));
}
void prepare()
{
VulkanExampleBase::prepare();
loadAssets();
setupQueryPool();
prepareUniformBuffers();
setupDescriptorSetLayout();
preparePipelines();
setupDescriptorPool();
setupDescriptorSets();
buildCommandBuffers();
prepared = true;
}
virtual void render()
{
if (!prepared)
return;
draw();
}
virtual void viewChanged()
{
updateUniformBuffers();
}
virtual void OnUpdateUIOverlay(vks::UIOverlay *overlay)
{
if (overlay->header("Settings")) {
if (overlay->comboBox("Object type", &models.objectIndex, models.names)) {
updateUniformBuffers();
buildCommandBuffers();
}
if (overlay->comboBox("Pipeline", &pipelineIndex, pipelineNames)) {
buildCommandBuffers();
}
if (overlay->sliderInt("Grid size", &gridSize, 1, 10)) {
buildCommandBuffers();
}
}
if (overlay->header("Timings")) {
if (!timings.empty()) {
overlay->text("Frame start to VS = %.3f ms", timings[0]);
overlay->text("VS to FS = %.3f ms", timings[1]);
}
}
}
};
VULKAN_EXAMPLE_MAIN()


@@ -28,7 +28,7 @@
#include "vulkanexamplebase.h"
// Set to "true" to enable Vulkan's validation layers (see vulkandebug.cpp for details)
#define ENABLE_VALIDATION false
#define ENABLE_VALIDATION true
// Set to "true" to use staging buffers for uploading vertex and index data to device local memory
// See "prepareVertices" for details on what's staging and on why to use it
#define USE_STAGING true